00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 594
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3260
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.061 The recommended git tool is: git
00:00:00.061 using credential 00000000-0000-0000-0000-000000000002
00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.084 Fetching changes from the remote Git repository
00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.131 Using shallow fetch with depth 1
00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.131 > git --version # timeout=10
00:00:00.184 > git --version # 'git version 2.39.2'
00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.223 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.235 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.251 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:04.251 > git config core.sparsecheckout # timeout=10
00:00:04.264 > git read-tree -mu HEAD # timeout=10
00:00:04.281 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:04.302 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:04.302 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10
00:00:04.411 [Pipeline] Start of Pipeline
00:00:04.429 [Pipeline] library
00:00:04.430 Loading library shm_lib@master
00:00:04.431 Library shm_lib@master is cached. Copying from home.
00:00:04.452 [Pipeline] node
00:00:04.461 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.467 [Pipeline] {
00:00:04.475 [Pipeline] catchError
00:00:04.477 [Pipeline] {
00:00:04.488 [Pipeline] wrap
00:00:04.496 [Pipeline] {
00:00:04.503 [Pipeline] stage
00:00:04.505 [Pipeline] { (Prologue)
00:00:04.695 [Pipeline] sh
00:00:04.977 + logger -p user.info -t JENKINS-CI
00:00:05.000 [Pipeline] echo
00:00:05.001 Node: GP8
00:00:05.009 [Pipeline] sh
00:00:05.316 [Pipeline] setCustomBuildProperty
00:00:05.331 [Pipeline] echo
00:00:05.332 Cleanup processes
00:00:05.339 [Pipeline] sh
00:00:05.629 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.629 36425 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.644 [Pipeline] sh
00:00:05.929 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.929 ++ grep -v 'sudo pgrep'
00:00:05.929 ++ awk '{print $1}'
00:00:05.929 + sudo kill -9
00:00:05.929 + true
00:00:05.943 [Pipeline] cleanWs
00:00:05.954 [WS-CLEANUP] Deleting project workspace...
00:00:05.954 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.961 [WS-CLEANUP] done
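The cleanup step just above kills anything still running out of the workspace before the build starts: `pgrep -af` lists matching processes, `grep -v 'sudo pgrep'` drops the pgrep invocation itself, `awk` keeps the PIDs, and the `+ true` trace shows an `|| true` guard absorbing the error from `kill -9` being called with no arguments (nothing was left running). A minimal standalone sketch of the same idiom; the workspace path is this job's, substitute your own:

    #!/usr/bin/env bash
    # Kill leftover processes matching a workspace path; never fail the build step.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Unquoted on purpose: one PID per word. With no PIDs, kill errors out
    # and || true swallows it, exactly as seen in the trace above.
    sudo kill -9 $pids || true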
00:00:05.966 [Pipeline] setCustomBuildProperty
00:00:05.980 [Pipeline] sh
00:00:06.266 + sudo git config --global --replace-all safe.directory '*'
00:00:06.328 [Pipeline] httpRequest
00:00:06.357 [Pipeline] echo
00:00:06.359 Sorcerer 10.211.164.101 is alive
00:00:06.395 [Pipeline] httpRequest
00:00:06.402 HttpMethod: GET
00:00:06.402 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:06.403 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:06.406 Response Code: HTTP/1.1 200 OK
00:00:06.406 Success: Status code 200 is in the accepted range: 200,404
00:00:06.406 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:07.023 [Pipeline] sh
00:00:07.312 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:07.587 [Pipeline] httpRequest
00:00:07.618 [Pipeline] echo
00:00:07.619 Sorcerer 10.211.164.101 is alive
00:00:07.628 [Pipeline] httpRequest
00:00:07.632 HttpMethod: GET
00:00:07.633 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:07.635 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:07.637 Response Code: HTTP/1.1 200 OK
00:00:07.638 Success: Status code 200 is in the accepted range: 200,404
00:00:07.638 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:19.094 [Pipeline] sh
00:00:19.374 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:23.573 [Pipeline] sh
00:00:23.851 + git -C spdk log --oneline -n5
00:00:23.851 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:23.851 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:23.851 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:23.851 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:23.851 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:00:23.866 [Pipeline] withCredentials
00:00:23.877 > git --version # timeout=10
00:00:23.888 > git --version # 'git version 2.39.2'
00:00:23.905 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:23.906 [Pipeline] {
00:00:23.912 [Pipeline] retry
00:00:23.913 [Pipeline] {
00:00:23.924 [Pipeline] sh
00:00:24.206 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:24.478 [Pipeline] }
00:00:24.492 [Pipeline] // retry
00:00:24.497 [Pipeline] }
00:00:24.516 [Pipeline] // withCredentials
00:00:24.526 [Pipeline] httpRequest
00:00:24.544 [Pipeline] echo
00:00:24.546 Sorcerer 10.211.164.101 is alive
00:00:24.552 [Pipeline] httpRequest
00:00:24.557 HttpMethod: GET
00:00:24.557 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:24.558 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:24.560 Response Code: HTTP/1.1 200 OK
00:00:24.561 Success: Status code 200 is in the accepted range: 200,404
00:00:24.561 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:28.505 [Pipeline] sh
00:00:28.787 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:32.999 [Pipeline] sh
00:00:33.283 + git -C dpdk log --oneline -n5
00:00:33.283 eeb0605f11 version: 23.11.0
00:00:33.283 238778122a doc: update release notes for 23.11
00:00:33.283 46aa6b3cfc doc: fix description of RSS features
00:00:33.283 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:33.283 7e421ae345 devtools: support skipping forbid rule check
00:00:33.297 [Pipeline] }
00:00:33.314 [Pipeline] // stage
00:00:33.323 [Pipeline] stage
00:00:33.325 [Pipeline] { (Prepare)
00:00:33.351 [Pipeline] writeFile
00:00:33.372 [Pipeline] sh
00:00:33.654 + logger -p user.info -t JENKINS-CI
00:00:33.667 [Pipeline] sh
00:00:33.988 + logger -p user.info -t JENKINS-CI
00:00:34.000 [Pipeline] sh
00:00:34.279 + cat autorun-spdk.conf
00:00:34.279 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.279 SPDK_TEST_NVMF=1
00:00:34.279 SPDK_TEST_NVME_CLI=1
00:00:34.279 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.279 SPDK_TEST_NVMF_NICS=e810
00:00:34.279 SPDK_TEST_VFIOUSER=1
00:00:34.279 SPDK_RUN_UBSAN=1
00:00:34.279 NET_TYPE=phy
00:00:34.279 SPDK_TEST_NATIVE_DPDK=v23.11
00:00:34.279 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:34.286 RUN_NIGHTLY=1
00:00:34.291 [Pipeline] readFile
00:00:34.317 [Pipeline] withEnv
00:00:34.319 [Pipeline] {
00:00:34.333 [Pipeline] sh
00:00:34.613 + set -ex
00:00:34.613 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:34.613 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:34.613 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.613 ++ SPDK_TEST_NVMF=1
00:00:34.613 ++ SPDK_TEST_NVME_CLI=1
00:00:34.613 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.613 ++ SPDK_TEST_NVMF_NICS=e810
00:00:34.613 ++ SPDK_TEST_VFIOUSER=1
00:00:34.613 ++ SPDK_RUN_UBSAN=1
00:00:34.613 ++ NET_TYPE=phy
00:00:34.613 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:00:34.613 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:34.613 ++ RUN_NIGHTLY=1
00:00:34.613 + case $SPDK_TEST_NVMF_NICS in
00:00:34.613 + DRIVERS=ice
00:00:34.613 + [[ tcp == \r\d\m\a ]]
00:00:34.613 + [[ -n ice ]]
00:00:34.613 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:34.613 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:34.613 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:34.613 rmmod: ERROR: Module irdma is not currently loaded
00:00:34.613 rmmod: ERROR: Module i40iw is not currently loaded
00:00:34.613 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:34.613 + true
00:00:34.613 + for D in $DRIVERS
00:00:34.613 + sudo modprobe ice
00:00:34.613 + exit 0
00:00:34.625 [Pipeline] }
00:00:34.646 [Pipeline] // withEnv
00:00:34.653 [Pipeline] }
00:00:34.669 [Pipeline] // stage
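With SPDK_TEST_NVMF_NICS=e810, the Prepare stage that just closed selects the ice driver, force-unloads any RDMA-capable modules a previous run may have left behind (the rmmod errors are expected when nothing is loaded), and then loads the driver it needs. A small hedged sketch of that reset pattern:

    #!/usr/bin/env bash
    # Reset NIC drivers before an NVMe-oF/TCP run.
    DRIVERS=ice
    # rmmod fails harmlessly for modules that are not loaded; || true keeps set -e happy.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done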
00:00:34.676 [Pipeline] catchError
00:00:34.677 [Pipeline] {
00:00:34.688 [Pipeline] timeout
00:00:34.689 Timeout set to expire in 50 min
00:00:34.690 [Pipeline] {
00:00:34.702 [Pipeline] stage
00:00:34.704 [Pipeline] { (Tests)
00:00:34.717 [Pipeline] sh
00:00:34.995 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.995 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.995 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.995 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:34.995 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.995 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:34.995 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:34.995 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:34.995 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:34.995 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:34.995 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:34.995 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.995 + source /etc/os-release
00:00:34.995 ++ NAME='Fedora Linux'
00:00:34.995 ++ VERSION='38 (Cloud Edition)'
00:00:34.995 ++ ID=fedora
00:00:34.995 ++ VERSION_ID=38
00:00:34.995 ++ VERSION_CODENAME=
00:00:34.995 ++ PLATFORM_ID=platform:f38
00:00:34.995 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:34.995 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:34.995 ++ LOGO=fedora-logo-icon
00:00:34.995 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:34.995 ++ HOME_URL=https://fedoraproject.org/
00:00:34.995 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:34.995 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:34.995 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:34.995 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:34.995 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:34.995 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:34.995 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:34.995 ++ SUPPORT_END=2024-05-14
00:00:34.995 ++ VARIANT='Cloud Edition'
00:00:34.995 ++ VARIANT_ID=cloud
00:00:34.995 + uname -a
00:00:34.995 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:34.995 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:36.402 Hugepages
00:00:36.402 node hugesize free / total
00:00:36.402 node0 1048576kB 0 / 0
00:00:36.402 node0 2048kB 0 / 0
00:00:36.402 node1 1048576kB 0 / 0
00:00:36.402 node1 2048kB 0 / 0
00:00:36.402
00:00:36.402 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:36.402 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:36.402 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:36.402 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:36.402 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:36.402 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:36.402 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:36.402 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:36.402 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:36.402 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:36.402 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
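setup.sh status above reports the hugepage pools per NUMA node (2048 kB and 1048576 kB sizes, all empty at this point) plus the I/OAT and NVMe devices SPDK can bind. The hugepage counters come straight from sysfs; a small sketch, assuming the standard kernel layout:

    #!/usr/bin/env bash
    # Print free/total hugepages per NUMA node, like the table above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}    # e.g. 2048kB or 1048576kB
            echo "$(basename "$node") $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done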
00:00:36.402 + rm -f /tmp/spdk-ld-path
00:00:36.402 + source autorun-spdk.conf
00:00:36.402 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:36.402 ++ SPDK_TEST_NVMF=1
00:00:36.402 ++ SPDK_TEST_NVME_CLI=1
00:00:36.402 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:36.402 ++ SPDK_TEST_NVMF_NICS=e810
00:00:36.402 ++ SPDK_TEST_VFIOUSER=1
00:00:36.402 ++ SPDK_RUN_UBSAN=1
00:00:36.402 ++ NET_TYPE=phy
00:00:36.402 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:00:36.402 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:36.402 ++ RUN_NIGHTLY=1
00:00:36.402 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:36.402 + [[ -n '' ]]
00:00:36.402 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.402 + for M in /var/spdk/build-*-manifest.txt
00:00:36.402 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:36.402 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:36.402 + for M in /var/spdk/build-*-manifest.txt
00:00:36.402 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:36.402 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:36.402 ++ uname
00:00:36.402 + [[ Linux == \L\i\n\u\x ]]
00:00:36.402 + sudo dmesg -T
00:00:36.402 + sudo dmesg --clear
00:00:36.402 + dmesg_pid=37127
00:00:36.402 + [[ Fedora Linux == FreeBSD ]]
00:00:36.402 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:36.402 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:36.402 + sudo dmesg -Tw
00:00:36.402 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:36.402 + [[ -x /usr/src/fio-static/fio ]]
00:00:36.402 + export FIO_BIN=/usr/src/fio-static/fio
00:00:36.402 + FIO_BIN=/usr/src/fio-static/fio
00:00:36.402 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:36.402 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:36.402 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:36.402 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:36.402 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:36.402 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:36.402 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:36.402 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:36.402 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:36.402 Test configuration:
00:00:36.402 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:36.402 SPDK_TEST_NVMF=1
00:00:36.402 SPDK_TEST_NVME_CLI=1
00:00:36.402 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:36.402 SPDK_TEST_NVMF_NICS=e810
00:00:36.402 SPDK_TEST_VFIOUSER=1
00:00:36.402 SPDK_RUN_UBSAN=1
00:00:36.402 NET_TYPE=phy
00:00:36.402 SPDK_TEST_NATIVE_DPDK=v23.11
00:00:36.402 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:36.402 RUN_NIGHTLY=1
23:13:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:36.402 23:13:57 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:36.402 23:13:57 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:36.402 23:13:57 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:36.402 23:13:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.402 23:13:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.402 23:13:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.402 23:13:57 -- paths/export.sh@5 -- $ export PATH
00:00:36.402 23:13:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:36.402 23:13:57 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:36.402 23:13:57 -- common/autobuild_common.sh@435 -- $ date +%s
00:00:36.402 23:13:57 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720732437.XXXXXX
00:00:36.402 23:13:57 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720732437.kjcqR8
00:00:36.402 23:13:57 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:00:36.402 23:13:57 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
00:00:36.402 23:13:57 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:36.402 23:13:57 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:00:36.402 23:13:57 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:36.402 23:13:57 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:36.402 23:13:57 -- common/autobuild_common.sh@451 -- $ get_config_params
00:00:36.402 23:13:57 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:00:36.402 23:13:57 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.402 23:13:57 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:00:36.402 23:13:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:36.402 23:13:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:36.402 23:13:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.402 23:13:57 -- spdk/autobuild.sh@16 -- $ date -u
00:00:36.402 Thu Jul 11 09:13:57 PM UTC 2024
00:00:36.402 23:13:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:36.402 LTS-59-g4b94202c6
00:00:36.402 23:13:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:36.402 23:13:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:36.402 23:13:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:36.402 23:13:57 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:00:36.402 23:13:57 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:00:36.402 23:13:57 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.402 ************************************
00:00:36.402 START TEST ubsan
00:00:36.402 ************************************
00:00:36.402 23:13:57 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:00:36.402 using ubsan
00:00:36.402
00:00:36.402 real 0m0.000s
00:00:36.402 user 0m0.000s
00:00:36.402 sys 0m0.000s
00:00:36.402 23:13:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:36.402 23:13:57 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.402 ************************************
00:00:36.402 END TEST ubsan
00:00:36.402 ************************************
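run_test is the wrapper that produces the START/END banners and the real/user/sys timing block around each test; the ubsan check here is just an echo, hence the 0m0.000s timings. A simplified sketch of such a wrapper, not the actual common/autotest_common.sh implementation:

    # Simplified run_test-style wrapper: banner, timed command, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test ubsan echo 'using ubsan'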
00:00:36.402 23:13:57 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:00:36.402 23:13:57 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:00:36.402 23:13:57 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk
00:00:36.402 23:13:57 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:00:36.402 23:13:57 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:00:36.402 23:13:57 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.402 ************************************
00:00:36.402 START TEST build_native_dpdk
00:00:36.402 ************************************
00:00:36.402 23:13:57 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
00:00:36.402 23:13:57 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:00:36.402 23:13:57 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:00:36.402 23:13:57 -- common/autobuild_common.sh@50 -- $ local compiler_version
00:00:36.402 23:13:57 -- common/autobuild_common.sh@51 -- $ local compiler
00:00:36.402 23:13:57 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:00:36.402 23:13:57 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:00:36.402 23:13:57 -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:00:36.402 23:13:57 -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:00:36.402 23:13:57 -- common/autobuild_common.sh@61 -- $ CC=gcc
00:00:36.402 23:13:57 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:00:36.402 23:13:57 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:00:36.402 23:13:57 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:00:36.402 23:13:57 -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:00:36.403 23:13:57 -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:00:36.403 23:13:57 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:36.403 23:13:57 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:36.403 23:13:57 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:00:36.403 23:13:57 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.403 23:13:57 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:00:36.403 eeb0605f11 version: 23.11.0
00:00:36.403 238778122a doc: update release notes for 23.11
00:00:36.403 46aa6b3cfc doc: fix description of RSS features
00:00:36.403 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:36.403 7e421ae345 devtools: support skipping forbid rule check
00:00:36.403 23:13:57 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:00:36.403 23:13:57 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:00:36.403 23:13:57 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:00:36.403 23:13:57 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:00:36.403 23:13:57 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:00:36.403 23:13:57 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:00:36.403 23:13:57 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:00:36.403 23:13:57 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:00:36.403 23:13:57 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:00:36.403 23:13:57 -- common/autobuild_common.sh@168 -- $ uname -s
00:00:36.403 23:13:57 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:00:36.403 23:13:57 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:00:36.403 23:13:57 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:00:36.403 23:13:57 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:00:36.403 23:13:57 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:00:36.403 23:13:57 -- scripts/common.sh@335 -- $ IFS=.-:
00:00:36.403 23:13:57 -- scripts/common.sh@335 -- $ read -ra ver1
00:00:36.403 23:13:57 -- scripts/common.sh@336 -- $ IFS=.-:
00:00:36.403 23:13:57 -- scripts/common.sh@336 -- $ read -ra ver2
00:00:36.403 23:13:57 -- scripts/common.sh@337 -- $ local 'op=<'
00:00:36.403 23:13:57 -- scripts/common.sh@339 -- $ ver1_l=3
00:00:36.403 23:13:57 -- scripts/common.sh@340 -- $ ver2_l=3
00:00:36.403 23:13:57 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:00:36.403 23:13:57 -- scripts/common.sh@343 -- $ case "$op" in
00:00:36.403 23:13:57 -- scripts/common.sh@344 -- $ : 1
00:00:36.403 23:13:57 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:00:36.403 23:13:57 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:36.403 23:13:57 -- scripts/common.sh@364 -- $ decimal 23
00:00:36.403 23:13:57 -- scripts/common.sh@352 -- $ local d=23
00:00:36.403 23:13:57 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:00:36.403 23:13:57 -- scripts/common.sh@354 -- $ echo 23
00:00:36.403 23:13:57 -- scripts/common.sh@364 -- $ ver1[v]=23
00:00:36.403 23:13:57 -- scripts/common.sh@365 -- $ decimal 21
00:00:36.403 23:13:57 -- scripts/common.sh@352 -- $ local d=21
00:00:36.403 23:13:57 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:00:36.403 23:13:57 -- scripts/common.sh@354 -- $ echo 21
00:00:36.403 23:13:57 -- scripts/common.sh@365 -- $ ver2[v]=21
00:00:36.403 23:13:57 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:00:36.403 23:13:57 -- scripts/common.sh@366 -- $ return 1
00:00:36.403 23:13:57 -- common/autobuild_common.sh@173 -- $ patch -p1
00:00:36.403 patching file config/rte_config.h
00:00:36.403 Hunk #1 succeeded at 60 (offset 1 line).
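The lt 23.11.0 21.11.0 trace above is scripts/common.sh comparing the two version strings field by field (split on ., - and :); it returns 1 because 23 > 21, so the 21.11-only code path is skipped and only the rte_config.h hunk is patched. A condensed sketch of that comparison logic, assuming plain dotted versions (the helper name version_lt is mine, not the script's):

    # Return 0 if $1 < $2, comparing numeric fields left to right.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1  # equal is not less-than
    }
    version_lt 23.11.0 21.11.0 || echo 'not lower'   # prints: not lower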
00:00:36.403 23:13:57 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:00:36.403 23:13:57 -- common/autobuild_common.sh@178 -- $ uname -s
00:00:36.403 23:13:57 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:00:36.403 23:13:57 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:00:36.403 23:13:57 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:44.529 The Meson build system
00:00:44.529 Version: 1.3.1
00:00:44.529 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:00:44.529 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:00:44.529 Build type: native build
00:00:44.529 Program cat found: YES (/usr/bin/cat)
00:00:44.529 Project name: DPDK
00:00:44.529 Project version: 23.11.0
00:00:44.529 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:44.529 C linker for the host machine: gcc ld.bfd 2.39-16
00:00:44.529 Host machine cpu family: x86_64
00:00:44.529 Host machine cpu: x86_64
00:00:44.529 Message: ## Building in Developer Mode ##
00:00:44.529 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:44.529 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:00:44.529 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:00:44.529 Program python3 found: YES (/usr/bin/python3)
00:00:44.529 Program cat found: YES (/usr/bin/cat)
00:00:44.529 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:00:44.530 Compiler for C supports arguments -march=native: YES
00:00:44.530 Checking for size of "void *" : 8
00:00:44.530 Checking for size of "void *" : 8 (cached)
00:00:44.530 Library m found: YES
00:00:44.530 Library numa found: YES
00:00:44.530 Has header "numaif.h" : YES
00:00:44.530 Library fdt found: NO
00:00:44.530 Library execinfo found: NO
00:00:44.530 Has header "execinfo.h" : YES
00:00:44.530 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:44.530 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:44.530 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:44.530 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:44.530 Run-time dependency openssl found: YES 3.0.9
00:00:44.530 Run-time dependency libpcap found: YES 1.10.4
00:00:44.530 Has header "pcap.h" with dependency libpcap: YES
00:00:44.530 Compiler for C supports arguments -Wcast-qual: YES
00:00:44.530 Compiler for C supports arguments -Wdeprecated: YES
00:00:44.530 Compiler for C supports arguments -Wformat: YES
00:00:44.530 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:44.530 Compiler for C supports arguments -Wformat-security: NO
00:00:44.530 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:44.530 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:44.530 Compiler for C supports arguments -Wnested-externs: YES
00:00:44.530 Compiler for C supports arguments -Wold-style-definition: YES
00:00:44.530 Compiler for C supports arguments -Wpointer-arith: YES
00:00:44.530 Compiler for C supports arguments -Wsign-compare: YES
00:00:44.530 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:44.530 Compiler for C supports arguments -Wundef: YES
00:00:44.530 Compiler for C supports arguments -Wwrite-strings: YES
00:00:44.530 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:44.530 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:44.530 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:44.530 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:44.530 Program objdump found: YES (/usr/bin/objdump)
00:00:44.530 Compiler for C supports arguments -mavx512f: YES
00:00:44.530 Checking if "AVX512 checking" compiles: YES
00:00:44.530 Fetching value of define "__SSE4_2__" : 1
00:00:44.530 Fetching value of define "__AES__" : 1
00:00:44.530 Fetching value of define "__AVX__" : 1
00:00:44.530 Fetching value of define "__AVX2__" : (undefined)
00:00:44.530 Fetching value of define "__AVX512BW__" : (undefined)
00:00:44.530 Fetching value of define "__AVX512CD__" : (undefined)
00:00:44.530 Fetching value of define "__AVX512DQ__" : (undefined)
00:00:44.530 Fetching value of define "__AVX512F__" : (undefined)
00:00:44.530 Fetching value of define "__AVX512VL__" : (undefined)
00:00:44.530 Fetching value of define "__PCLMUL__" : 1
00:00:44.530 Fetching value of define "__RDRND__" : 1
00:00:44.530 Fetching value of define "__RDSEED__" : (undefined)
00:00:44.530 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:44.530 Fetching value of define "__znver1__" : (undefined)
00:00:44.530 Fetching value of define "__znver2__" : (undefined)
00:00:44.530 Fetching value of define "__znver3__" : (undefined)
00:00:44.530 Fetching value of define "__znver4__" : (undefined)
00:00:44.530 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:44.530 Message: lib/log: Defining dependency "log"
00:00:44.530 Message: lib/kvargs: Defining dependency "kvargs"
"kvargs" 00:00:44.530 Message: lib/telemetry: Defining dependency "telemetry" 00:00:44.530 Checking for function "getentropy" : NO 00:00:44.530 Message: lib/eal: Defining dependency "eal" 00:00:44.530 Message: lib/ring: Defining dependency "ring" 00:00:44.530 Message: lib/rcu: Defining dependency "rcu" 00:00:44.530 Message: lib/mempool: Defining dependency "mempool" 00:00:44.530 Message: lib/mbuf: Defining dependency "mbuf" 00:00:44.530 Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:44.530 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:44.530 Compiler for C supports arguments -mpclmul: YES 00:00:44.530 Compiler for C supports arguments -maes: YES 00:00:44.530 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:44.530 Compiler for C supports arguments -mavx512bw: YES 00:00:44.530 Compiler for C supports arguments -mavx512dq: YES 00:00:44.530 Compiler for C supports arguments -mavx512vl: YES 00:00:44.530 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:44.530 Compiler for C supports arguments -mavx2: YES 00:00:44.530 Compiler for C supports arguments -mavx: YES 00:00:44.530 Message: lib/net: Defining dependency "net" 00:00:44.530 Message: lib/meter: Defining dependency "meter" 00:00:44.530 Message: lib/ethdev: Defining dependency "ethdev" 00:00:44.530 Message: lib/pci: Defining dependency "pci" 00:00:44.530 Message: lib/cmdline: Defining dependency "cmdline" 00:00:44.530 Message: lib/metrics: Defining dependency "metrics" 00:00:44.530 Message: lib/hash: Defining dependency "hash" 00:00:44.530 Message: lib/timer: Defining dependency "timer" 00:00:44.530 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:44.530 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:00:44.530 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:00:44.530 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:00:44.530 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:00:44.530 Message: lib/acl: Defining dependency "acl" 00:00:44.530 Message: lib/bbdev: Defining dependency "bbdev" 00:00:44.530 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:44.530 Run-time dependency libelf found: YES 0.190 00:00:44.530 Message: lib/bpf: Defining dependency "bpf" 00:00:44.530 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:44.530 Message: lib/compressdev: Defining dependency "compressdev" 00:00:44.530 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:44.530 Message: lib/distributor: Defining dependency "distributor" 00:00:44.530 Message: lib/dmadev: Defining dependency "dmadev" 00:00:44.530 Message: lib/efd: Defining dependency "efd" 00:00:44.530 Message: lib/eventdev: Defining dependency "eventdev" 00:00:44.530 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:44.530 Message: lib/gpudev: Defining dependency "gpudev" 00:00:44.530 Message: lib/gro: Defining dependency "gro" 00:00:44.530 Message: lib/gso: Defining dependency "gso" 00:00:44.530 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:44.530 Message: lib/jobstats: Defining dependency "jobstats" 00:00:44.530 Message: lib/latencystats: Defining dependency "latencystats" 00:00:44.530 Message: lib/lpm: Defining dependency "lpm" 00:00:44.530 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:44.530 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:44.530 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:44.530 Compiler for C 
00:00:44.530 Message: lib/member: Defining dependency "member"
00:00:44.530 Message: lib/pcapng: Defining dependency "pcapng"
00:00:44.530 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:44.530 Message: lib/power: Defining dependency "power"
00:00:44.530 Message: lib/rawdev: Defining dependency "rawdev"
00:00:44.530 Message: lib/regexdev: Defining dependency "regexdev"
00:00:44.530 Message: lib/mldev: Defining dependency "mldev"
00:00:44.530 Message: lib/rib: Defining dependency "rib"
00:00:44.530 Message: lib/reorder: Defining dependency "reorder"
00:00:44.530 Message: lib/sched: Defining dependency "sched"
00:00:44.530 Message: lib/security: Defining dependency "security"
00:00:44.530 Message: lib/stack: Defining dependency "stack"
00:00:44.530 Has header "linux/userfaultfd.h" : YES
00:00:44.530 Has header "linux/vduse.h" : YES
00:00:44.530 Message: lib/vhost: Defining dependency "vhost"
00:00:44.530 Message: lib/ipsec: Defining dependency "ipsec"
00:00:44.530 Message: lib/pdcp: Defining dependency "pdcp"
00:00:44.530 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:44.530 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:00:44.530 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:00:44.530 Compiler for C supports arguments -mavx512bw: YES (cached)
00:00:44.530 Message: lib/fib: Defining dependency "fib"
00:00:44.530 Message: lib/port: Defining dependency "port"
00:00:44.530 Message: lib/pdump: Defining dependency "pdump"
00:00:44.530 Message: lib/table: Defining dependency "table"
00:00:44.530 Message: lib/pipeline: Defining dependency "pipeline"
00:00:44.530 Message: lib/graph: Defining dependency "graph"
00:00:44.530 Message: lib/node: Defining dependency "node"
00:00:47.074 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:47.074 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:47.074 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:47.074 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:47.074 Compiler for C supports arguments -Wno-sign-compare: YES
00:00:47.074 Compiler for C supports arguments -Wno-unused-value: YES
00:00:47.074 Compiler for C supports arguments -Wno-format: YES
00:00:47.074 Compiler for C supports arguments -Wno-format-security: YES
00:00:47.074 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:00:47.074 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:00:47.074 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:00:47.074 Compiler for C supports arguments -Wno-unused-parameter: YES
00:00:47.074 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:47.074 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:47.074 Compiler for C supports arguments -mavx512bw: YES (cached)
00:00:47.074 Compiler for C supports arguments -march=skylake-avx512: YES
00:00:47.074 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:00:47.074 Has header "sys/epoll.h" : YES
00:00:47.074 Program doxygen found: YES (/usr/bin/doxygen)
00:00:47.074 Configuring doxy-api-html.conf using configuration
00:00:47.074 Configuring doxy-api-man.conf using configuration
00:00:47.074 Program mandb found: YES (/usr/bin/mandb)
00:00:47.074 Program sphinx-build found: NO
00:00:47.074 Configuring rte_build_config.h using configuration
00:00:47.074 Message:
00:00:47.074 =================
00:00:47.074 Applications Enabled
00:00:47.074 =================
00:00:47.074
00:00:47.074 apps:
00:00:47.074 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:00:47.074 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:00:47.074 test-pmd, test-regex, test-sad, test-security-perf,
00:00:47.074
00:00:47.074 Message:
00:00:47.074 =================
00:00:47.074 Libraries Enabled
00:00:47.074 =================
00:00:47.074
00:00:47.074 libs:
00:00:47.074 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:00:47.074 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:00:47.074 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:00:47.074 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:00:47.074 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:00:47.074 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:00:47.074 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:00:47.074
00:00:47.074
00:00:47.074 Message:
00:00:47.074 ===============
00:00:47.074 Drivers Enabled
00:00:47.074 ===============
00:00:47.074
00:00:47.074 common:
00:00:47.074
00:00:47.074 bus:
00:00:47.074 pci, vdev,
00:00:47.074 mempool:
00:00:47.074 ring,
00:00:47.074 dma:
00:00:47.074
00:00:47.074 net:
00:00:47.074 i40e,
00:00:47.074 raw:
00:00:47.074
00:00:47.074 crypto:
00:00:47.074
00:00:47.074 compress:
00:00:47.074
00:00:47.074 regex:
00:00:47.074
00:00:47.074 ml:
00:00:47.074
00:00:47.074 vdpa:
00:00:47.074
00:00:47.074 event:
00:00:47.074
00:00:47.074 baseband:
00:00:47.074
00:00:47.074 gpu:
00:00:47.074
00:00:47.074
00:00:47.074 Message:
00:00:47.074 =================
00:00:47.074 Content Skipped
00:00:47.074 =================
00:00:47.074
00:00:47.074 apps:
00:00:47.074
00:00:47.074 libs:
00:00:47.074
00:00:47.074 drivers:
00:00:47.074 common/cpt: not in enabled drivers build config
00:00:47.074 common/dpaax: not in enabled drivers build config
00:00:47.074 common/iavf: not in enabled drivers build config
00:00:47.074 common/idpf: not in enabled drivers build config
00:00:47.074 common/mvep: not in enabled drivers build config
00:00:47.074 common/octeontx: not in enabled drivers build config
00:00:47.074 bus/auxiliary: not in enabled drivers build config
00:00:47.074 bus/cdx: not in enabled drivers build config
00:00:47.074 bus/dpaa: not in enabled drivers build config
00:00:47.074 bus/fslmc: not in enabled drivers build config
00:00:47.074 bus/ifpga: not in enabled drivers build config
00:00:47.074 bus/platform: not in enabled drivers build config
00:00:47.074 bus/vmbus: not in enabled drivers build config
00:00:47.074 common/cnxk: not in enabled drivers build config
00:00:47.074 common/mlx5: not in enabled drivers build config
00:00:47.074 common/nfp: not in enabled drivers build config
00:00:47.074 common/qat: not in enabled drivers build config
00:00:47.074 common/sfc_efx: not in enabled drivers build config
00:00:47.074 mempool/bucket: not in enabled drivers build config
00:00:47.074 mempool/cnxk: not in enabled drivers build config
00:00:47.074 mempool/dpaa: not in enabled drivers build config
00:00:47.074 mempool/dpaa2: not in enabled drivers build config
00:00:47.074 mempool/octeontx: not in enabled drivers build config
00:00:47.074 mempool/stack: not in enabled drivers build config
00:00:47.074 dma/cnxk: not in enabled drivers build config
00:00:47.074 dma/dpaa: not in enabled drivers build config
00:00:47.074 dma/dpaa2: not in enabled drivers build config
00:00:47.074 dma/hisilicon: not in enabled drivers build config
00:00:47.074 dma/idxd: not in enabled drivers build config
00:00:47.074 dma/ioat: not in enabled drivers build config
00:00:47.074 dma/skeleton: not in enabled drivers build config
00:00:47.074 net/af_packet: not in enabled drivers build config
00:00:47.074 net/af_xdp: not in enabled drivers build config
00:00:47.074 net/ark: not in enabled drivers build config
00:00:47.074 net/atlantic: not in enabled drivers build config
00:00:47.074 net/avp: not in enabled drivers build config
00:00:47.074 net/axgbe: not in enabled drivers build config
00:00:47.074 net/bnx2x: not in enabled drivers build config
00:00:47.074 net/bnxt: not in enabled drivers build config
00:00:47.074 net/bonding: not in enabled drivers build config
00:00:47.074 net/cnxk: not in enabled drivers build config
00:00:47.074 net/cpfl: not in enabled drivers build config
00:00:47.074 net/cxgbe: not in enabled drivers build config
00:00:47.074 net/dpaa: not in enabled drivers build config
00:00:47.074 net/dpaa2: not in enabled drivers build config
00:00:47.074 net/e1000: not in enabled drivers build config
00:00:47.074 net/ena: not in enabled drivers build config
00:00:47.074 net/enetc: not in enabled drivers build config
00:00:47.074 net/enetfec: not in enabled drivers build config
00:00:47.074 net/enic: not in enabled drivers build config
00:00:47.074 net/failsafe: not in enabled drivers build config
00:00:47.074 net/fm10k: not in enabled drivers build config
00:00:47.074 net/gve: not in enabled drivers build config
00:00:47.074 net/hinic: not in enabled drivers build config
00:00:47.074 net/hns3: not in enabled drivers build config
00:00:47.074 net/iavf: not in enabled drivers build config
00:00:47.074 net/ice: not in enabled drivers build config
00:00:47.074 net/idpf: not in enabled drivers build config
00:00:47.074 net/igc: not in enabled drivers build config
00:00:47.074 net/ionic: not in enabled drivers build config
00:00:47.074 net/ipn3ke: not in enabled drivers build config
00:00:47.074 net/ixgbe: not in enabled drivers build config
00:00:47.074 net/mana: not in enabled drivers build config
00:00:47.074 net/memif: not in enabled drivers build config
00:00:47.074 net/mlx4: not in enabled drivers build config
00:00:47.074 net/mlx5: not in enabled drivers build config
00:00:47.074 net/mvneta: not in enabled drivers build config
00:00:47.074 net/mvpp2: not in enabled drivers build config
00:00:47.074 net/netvsc: not in enabled drivers build config
00:00:47.074 net/nfb: not in enabled drivers build config
00:00:47.074 net/nfp: not in enabled drivers build config
00:00:47.074 net/ngbe: not in enabled drivers build config
00:00:47.074 net/null: not in enabled drivers build config
00:00:47.074 net/octeontx: not in enabled drivers build config
00:00:47.074 net/octeon_ep: not in enabled drivers build config
00:00:47.074 net/pcap: not in enabled drivers build config
00:00:47.074 net/pfe: not in enabled drivers build config
00:00:47.074 net/qede: not in enabled drivers build config
00:00:47.074 net/ring: not in enabled drivers build config
00:00:47.074 net/sfc: not in enabled drivers build config
00:00:47.074 net/softnic: not in enabled drivers build config
00:00:47.074 net/tap: not in enabled drivers build config
00:00:47.074 net/thunderx: not in enabled drivers build config
00:00:47.074 net/txgbe: not in enabled drivers build config
00:00:47.074 net/vdev_netvsc: not in enabled drivers build config
00:00:47.074 net/vhost: not in enabled drivers build config
00:00:47.074 net/virtio: not in enabled drivers build config
00:00:47.074 net/vmxnet3: not in enabled drivers build config
00:00:47.074 raw/cnxk_bphy: not in enabled drivers build config
00:00:47.074 raw/cnxk_gpio: not in enabled drivers build config
00:00:47.074 raw/dpaa2_cmdif: not in enabled drivers build config
00:00:47.074 raw/ifpga: not in enabled drivers build config
00:00:47.074 raw/ntb: not in enabled drivers build config
00:00:47.074 raw/skeleton: not in enabled drivers build config
00:00:47.074 crypto/armv8: not in enabled drivers build config
00:00:47.074 crypto/bcmfs: not in enabled drivers build config
00:00:47.074 crypto/caam_jr: not in enabled drivers build config
00:00:47.074 crypto/ccp: not in enabled drivers build config
00:00:47.074 crypto/cnxk: not in enabled drivers build config
00:00:47.074 crypto/dpaa_sec: not in enabled drivers build config
00:00:47.075 crypto/dpaa2_sec: not in enabled drivers build config
00:00:47.075 crypto/ipsec_mb: not in enabled drivers build config
00:00:47.075 crypto/mlx5: not in enabled drivers build config
00:00:47.075 crypto/mvsam: not in enabled drivers build config
00:00:47.075 crypto/nitrox: not in enabled drivers build config
00:00:47.075 crypto/null: not in enabled drivers build config
00:00:47.075 crypto/octeontx: not in enabled drivers build config
00:00:47.075 crypto/openssl: not in enabled drivers build config
00:00:47.075 crypto/scheduler: not in enabled drivers build config
00:00:47.075 crypto/uadk: not in enabled drivers build config
00:00:47.075 crypto/virtio: not in enabled drivers build config
00:00:47.075 compress/isal: not in enabled drivers build config
00:00:47.075 compress/mlx5: not in enabled drivers build config
00:00:47.075 compress/octeontx: not in enabled drivers build config
00:00:47.075 compress/zlib: not in enabled drivers build config
00:00:47.075 regex/mlx5: not in enabled drivers build config
00:00:47.075 regex/cn9k: not in enabled drivers build config
00:00:47.075 ml/cnxk: not in enabled drivers build config
00:00:47.075 vdpa/ifc: not in enabled drivers build config
00:00:47.075 vdpa/mlx5: not in enabled drivers build config
00:00:47.075 vdpa/nfp: not in enabled drivers build config
00:00:47.075 vdpa/sfc: not in enabled drivers build config
00:00:47.075 event/cnxk: not in enabled drivers build config
00:00:47.075 event/dlb2: not in enabled drivers build config
00:00:47.075 event/dpaa: not in enabled drivers build config
00:00:47.075 event/dpaa2: not in enabled drivers build config
00:00:47.075 event/dsw: not in enabled drivers build config
00:00:47.075 event/opdl: not in enabled drivers build config
00:00:47.075 event/skeleton: not in enabled drivers build config
00:00:47.075 event/sw: not in enabled drivers build config
00:00:47.075 event/octeontx: not in enabled drivers build config
00:00:47.075 baseband/acc: not in enabled drivers build config
00:00:47.075 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:00:47.075 baseband/fpga_lte_fec: not in enabled drivers build config
00:00:47.075 baseband/la12xx: not in enabled drivers build config
00:00:47.075 baseband/null: not in enabled drivers build config
00:00:47.075 baseband/turbo_sw: not in enabled drivers build config
00:00:47.075 gpu/cuda: not in enabled drivers build config
00:00:47.075
00:00:47.075
00:00:47.075 Build targets in project: 220
00:00:47.075
00:00:47.075 DPDK 23.11.0
00:00:47.075
00:00:47.075 User defined options
00:00:47.075 libdir : lib
00:00:47.075 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:47.075 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:00:47.075 c_link_args :
00:00:47.075 enable_docs : false
00:00:47.075 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:47.075 enable_kmods : false
00:00:47.075 machine : native
00:00:47.075 tests : false
00:00:47.075
00:00:47.075 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:47.075 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
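Everything from "The Meson build system" down to this options summary is the configure step; the compilation itself is the separate ninja run that follows. The two-step shape, reduced to its essentials (same options as the job's invocation above, but written as `meson setup`, the spelling the deprecation warning asks for):

    # Configure into build-tmp/, then let ninja do the compilation.
    meson setup build-tmp --prefix="$PWD/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j"$(nproc)"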
00:00:47.075 23:14:07 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:00:47.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:00:47.075 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:00:47.075 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:00:47.075 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:00:47.075 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:00:47.075 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:00:47.075 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:00:47.075 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:00:47.339 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:00:47.339 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:00:47.339 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:00:47.339 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:00:47.339 [12/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:00:47.339 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:00:47.339 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:00:47.339 [15/710] Linking static target lib/librte_kvargs.a
00:00:47.339 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:00:47.339 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:00:47.339 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:00:47.339 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:00:47.339 [20/710] Linking static target lib/librte_log.a
00:00:47.600 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:00:47.600 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:00:48.181 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:00:48.181 [24/710] Linking target lib/librte_log.so.24.0
00:00:48.181 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:00:48.181 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:00:48.181 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:00:48.181 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:00:48.442 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:00:48.442 [30/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:00:48.442 [31/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:00:48.442 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:00:48.442 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:00:48.442 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:00:48.442 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:00:48.442 [36/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:00:48.442 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:00:48.442 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:00:48.442 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:00:48.442 [40/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:00:48.442 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:00:48.442 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:00:48.442 [43/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:00:48.442 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:00:48.442 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:00:48.442 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:00:48.442 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:00:48.442 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:00:48.442 [49/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:00:48.442 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:00:48.442 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:00:48.442 [52/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:00:48.443 [53/710] Linking target lib/librte_kvargs.so.24.0
00:00:48.443 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:00:48.443 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:00:48.443 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:00:48.443 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:00:48.443 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:00:48.707 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:00:48.707 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:00:48.707 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:00:48.707 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:00:48.707 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:00:48.707 [64/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:00:48.707 [65/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:00:48.965 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:00:49.228 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:00:49.228 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:00:49.228 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:00:49.228 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:00:49.228 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:00:49.228 [72/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:49.228 [73/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:49.228 [74/710] Linking static target lib/librte_pci.a 00:00:49.228 [75/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:49.228 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:49.491 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:49.491 [78/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:49.491 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:49.491 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:49.491 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:49.491 [82/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.491 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:49.491 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:49.491 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:49.491 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:49.491 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:49.491 [88/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:49.491 [89/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:49.491 [90/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:49.491 [91/710] Linking static target lib/librte_ring.a 00:00:49.752 [92/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:49.752 [93/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:00:49.752 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:49.752 [95/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:00:49.752 [96/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:49.752 [97/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:49.752 [98/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:49.752 [99/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:49.752 [100/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:49.752 [101/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:49.752 [102/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:49.752 [103/710] Linking static target lib/librte_meter.a 00:00:50.015 [104/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:50.015 [105/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:50.015 [106/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:50.015 [107/710] Linking static target lib/librte_eal.a 00:00:50.015 [108/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:50.015 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:50.015 [110/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.015 [111/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:50.015 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:50.015 [113/710] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:50.015 [114/710] Linking static target lib/librte_telemetry.a 00:00:50.015 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:50.015 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:50.279 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:50.279 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:50.279 [119/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.279 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:50.279 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:50.279 [122/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:50.279 [123/710] Linking static target lib/librte_net.a 00:00:50.541 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:50.541 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:50.541 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:50.541 [127/710] Linking static target lib/librte_cmdline.a 00:00:50.806 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:50.806 [129/710] Linking static target lib/librte_mempool.a 00:00:50.806 [130/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:50.806 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:50.806 [132/710] Linking static target lib/librte_cfgfile.a 00:00:50.806 [133/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.806 [134/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:50.806 [135/710] Linking target lib/librte_telemetry.so.24.0 00:00:50.806 [136/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.806 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:50.806 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:50.806 [139/710] Linking static target lib/librte_metrics.a 00:00:51.068 [140/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:51.068 [141/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:00:51.068 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:51.068 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:51.068 [144/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:51.068 [145/710] Linking static target lib/librte_bitratestats.a 00:00:51.068 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:51.068 [147/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:51.068 [148/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:51.068 [149/710] Linking static target lib/librte_rcu.a 00:00:51.334 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:51.334 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:51.334 [152/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:51.334 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.334 [154/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:51.334 [155/710] 
Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:51.334 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:51.334 [157/710] Linking static target lib/librte_timer.a 00:00:51.334 [158/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:51.334 [159/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:51.334 [160/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:51.611 [161/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.611 [162/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.611 [163/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:51.611 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.611 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:51.611 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:51.611 [167/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.872 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:51.872 [169/710] Linking static target lib/librte_bbdev.a 00:00:51.872 [170/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.872 [171/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.872 [172/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:51.872 [173/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:52.134 [174/710] Linking static target lib/librte_compressdev.a 00:00:52.134 [175/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:52.134 [176/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:00:52.134 [177/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:52.134 [178/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:00:52.395 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:00:52.395 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:52.395 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:52.395 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:52.659 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:52.659 [184/710] Linking static target lib/librte_distributor.a 00:00:52.659 [185/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:52.659 [186/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.659 [187/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:52.659 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:00:52.659 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:52.659 [190/710] Linking static target lib/librte_bpf.a 00:00:52.659 [191/710] Linking static target lib/librte_dmadev.a 00:00:52.659 [192/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.920 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:52.920 
[194/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:00:52.920 [195/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.920 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:52.920 [197/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:52.920 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:52.920 [199/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:53.181 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:53.181 [201/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:00:53.181 [202/710] Linking static target lib/librte_dispatcher.a 00:00:53.181 [203/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:53.181 [204/710] Linking static target lib/librte_gpudev.a 00:00:53.181 [205/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:53.181 [206/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:53.181 [207/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:53.181 [208/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:53.181 [209/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:53.181 [210/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.443 [211/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:53.443 [212/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.443 [213/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:53.443 [214/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:53.443 [215/710] Linking static target lib/librte_gro.a 00:00:53.709 [216/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:53.709 [217/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:53.709 [218/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:53.709 [219/710] Linking static target lib/librte_jobstats.a 00:00:53.709 [220/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.709 [221/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:00:53.968 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.968 [223/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:53.968 [224/710] Linking static target lib/librte_latencystats.a 00:00:53.968 [225/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:00:53.968 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:54.255 [227/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:54.255 [228/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:00:54.255 [229/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.255 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:00:54.255 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:00:54.255 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:54.255 [233/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:00:54.255 
[234/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.255 [235/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:00:54.255 [236/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:54.540 [237/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:00:54.540 [238/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:00:54.540 [239/710] Linking static target lib/librte_ip_frag.a 00:00:54.540 [240/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:54.540 [241/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.540 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:00:54.540 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:00:54.540 [244/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:00:54.817 [245/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:54.817 [246/710] Linking static target lib/librte_gso.a 00:00:54.817 [247/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:00:54.817 [248/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:00:54.817 [249/710] Linking static target lib/librte_regexdev.a 00:00:54.817 [250/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.078 [251/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:00:55.078 [252/710] Linking static target lib/librte_rawdev.a 00:00:55.078 [253/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.079 [254/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:00:55.079 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:00:55.079 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:00:55.079 [257/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:00:55.079 [258/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:00:55.341 [259/710] Linking static target lib/acl/libavx2_tmp.a 00:00:55.341 [260/710] Linking static target lib/librte_pcapng.a 00:00:55.341 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:00:55.341 [262/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:00:55.341 [263/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:00:55.341 [264/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:00:55.341 [265/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:00:55.341 [266/710] Linking static target lib/librte_mldev.a 00:00:55.341 [267/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:00:55.341 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:00:55.602 [269/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:00:55.602 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:00:55.602 [271/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:00:55.602 [272/710] Linking static target lib/librte_stack.a 00:00:55.602 [273/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:55.602 [274/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:00:55.602 
[275/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.602 [276/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:00:55.602 [277/710] Linking static target lib/librte_hash.a 00:00:55.602 [278/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:55.602 [279/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.602 [280/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:00:55.602 [281/710] Linking static target lib/librte_lpm.a 00:00:55.868 [282/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:00:55.868 [283/710] Linking static target lib/librte_efd.a 00:00:55.868 [284/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:00:55.868 [285/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.868 [286/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:00:55.868 [287/710] Linking static target lib/librte_power.a 00:00:55.868 [288/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.131 [289/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:00:56.131 [290/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:00:56.131 [291/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.131 [292/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:00:56.397 [293/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.397 [294/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:00:56.397 [295/710] Linking static target lib/librte_reorder.a 00:00:56.397 [296/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:00:56.397 [297/710] Linking static target lib/librte_security.a 00:00:56.397 [298/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:00:56.397 [299/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.397 [300/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:00:56.662 [301/710] Linking static target lib/acl/libavx512_tmp.a 00:00:56.662 [302/710] Linking static target lib/librte_acl.a 00:00:56.662 [303/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:00:56.662 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:00:56.662 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:00:56.662 [306/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:00:56.662 [307/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:00:56.662 [308/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:56.662 [309/710] Linking static target lib/librte_mbuf.a 00:00:56.662 [310/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:00:56.662 [311/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:00:56.662 [312/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:00:56.925 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:00:56.925 [314/710] Linking static target lib/librte_rib.a 00:00:56.925 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:00:56.925 [316/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.925 [317/710] Compiling C 
object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:00:56.925 [318/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:00:56.925 [319/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.925 [320/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:00:56.925 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:00:56.925 [322/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:00:56.925 [323/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:00:56.925 [324/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.192 [325/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.192 [326/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:00:57.452 [327/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.452 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.452 [329/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:00:57.452 [330/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:00:57.452 [331/710] Linking static target lib/librte_member.a 00:00:57.452 [332/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.715 [333/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:00:57.715 [334/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:00:57.715 [335/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:00:57.978 [336/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:00:57.978 [337/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:00:57.978 [338/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.240 [339/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:00:58.240 [340/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:00:58.240 [341/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:00:58.240 [342/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:00:58.240 [343/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:00:58.240 [344/710] Linking static target lib/librte_eventdev.a 00:00:58.240 [345/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:00:58.503 [346/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:00:58.503 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:00:58.503 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:00:58.503 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:00:58.503 [350/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:00:58.503 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:00:58.503 [352/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:00:58.503 [353/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:00:58.503 [354/710] Linking static target lib/librte_fib.a 00:00:58.766 [355/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:00:58.766 [356/710] Linking static target lib/librte_ethdev.a 00:00:58.766 [357/710] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:00:58.766 [358/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:00:58.766 [359/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:00:58.766 [360/710] Linking static target lib/librte_cryptodev.a 00:00:58.766 [361/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:00:58.766 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:00:58.766 [363/710] Linking static target lib/librte_sched.a 00:00:58.766 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:00:58.766 [365/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:00:58.766 [366/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:00:59.028 [367/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:00:59.028 [368/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.028 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:00:59.297 [370/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:00:59.297 [371/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:00:59.297 [372/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:00:59.297 [373/710] Linking static target lib/librte_pdump.a 00:00:59.297 [374/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:00:59.297 [375/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:00:59.297 [376/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.555 [377/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:00:59.555 [378/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:00:59.555 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:00:59.555 [380/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:00:59.555 [381/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:00:59.555 [382/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:00:59.555 [383/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:00:59.816 [384/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.816 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:00:59.816 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:00:59.816 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:00:59.816 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:00:59.816 [389/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:00.079 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:00.079 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:00.343 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:00.343 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:00.343 [394/710] Linking static target lib/librte_ipsec.a 00:01:00.343 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:00.343 [396/710] Linking static target lib/librte_table.a 00:01:00.343 [397/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:00.343 [398/710] Compiling C object 
lib/librte_node.a.p/node_kernel_tx.c.o 00:01:00.621 [399/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:00.621 [400/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:00.621 [401/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.884 [402/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:00.884 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:00.884 [404/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.884 [405/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:00.884 [406/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.884 [407/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:00.884 [408/710] Linking target lib/librte_eal.so.24.0 00:01:01.147 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:01.147 [410/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:01.147 [411/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:01.147 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:01.147 [413/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:01.147 [414/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:01.147 [415/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:01.147 [416/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:01.411 [417/710] Linking target lib/librte_ring.so.24.0 00:01:01.411 [418/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:01.411 [419/710] Linking target lib/librte_meter.so.24.0 00:01:01.411 [420/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:01.411 [421/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.411 [422/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:01.411 [423/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:01.675 [424/710] Linking target lib/librte_pci.so.24.0 00:01:01.675 [425/710] Linking target lib/librte_timer.so.24.0 00:01:01.675 [426/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:01.675 [427/710] Linking target lib/librte_rcu.so.24.0 00:01:01.675 [428/710] Linking target lib/librte_acl.so.24.0 00:01:01.675 [429/710] Linking target lib/librte_mempool.so.24.0 00:01:01.675 [430/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:01.675 [431/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.675 [432/710] Linking target lib/librte_cfgfile.so.24.0 00:01:01.675 [433/710] Linking target lib/librte_dmadev.so.24.0 00:01:01.675 [434/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:01.675 [435/710] Linking target lib/librte_jobstats.so.24.0 00:01:01.938 [436/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:01.938 [437/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:01.938 [438/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:01.938 [439/710] Generating symbol file 
lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:01.938 [440/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:01.938 [441/710] Linking target lib/librte_stack.so.24.0 00:01:01.938 [442/710] Linking target lib/librte_rawdev.so.24.0 00:01:01.938 [443/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.938 [444/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.938 [445/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:01.938 [446/710] Linking static target drivers/librte_bus_vdev.a 00:01:01.938 [447/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:01.938 [448/710] Linking target lib/librte_rib.so.24.0 00:01:01.938 [449/710] Linking target lib/librte_mbuf.so.24.0 00:01:01.938 [450/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:01.938 [451/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:02.214 [452/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:02.214 [453/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:02.214 [454/710] Linking static target lib/librte_graph.a 00:01:02.214 [455/710] Linking static target lib/librte_port.a 00:01:02.214 [456/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:02.214 [457/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:02.215 [458/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:02.215 [459/710] Linking target lib/librte_net.so.24.0 00:01:02.215 [460/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:02.215 [461/710] Linking target lib/librte_fib.so.24.0 00:01:02.478 [462/710] Linking target lib/librte_bbdev.so.24.0 00:01:02.478 [463/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:02.478 [464/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.478 [465/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:02.478 [466/710] Linking target lib/librte_distributor.so.24.0 00:01:02.478 [467/710] Linking target lib/librte_compressdev.so.24.0 00:01:02.478 [468/710] Linking target lib/librte_cryptodev.so.24.0 00:01:02.478 [469/710] Linking target lib/librte_gpudev.so.24.0 00:01:02.478 [470/710] Linking target lib/librte_regexdev.so.24.0 00:01:02.478 [471/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:02.478 [472/710] Linking target lib/librte_mldev.so.24.0 00:01:02.478 [473/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:02.478 [474/710] Linking static target drivers/librte_bus_pci.a 00:01:02.478 [475/710] Linking target lib/librte_reorder.so.24.0 00:01:02.478 [476/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:02.478 [477/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:02.478 [478/710] Linking target lib/librte_sched.so.24.0 00:01:02.478 [479/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:02.478 [480/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:02.478 [481/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:02.741 [482/710] Linking target 
lib/librte_cmdline.so.24.0 00:01:02.741 [483/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:02.741 [484/710] Linking target lib/librte_hash.so.24.0 00:01:02.741 [485/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:02.741 [486/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:02.741 [487/710] Linking target lib/librte_security.so.24.0 00:01:02.741 [488/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:02.741 [489/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:03.009 [490/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:03.009 [491/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:03.009 [492/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:03.009 [493/710] Linking static target drivers/librte_mempool_ring.a 00:01:03.009 [494/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:03.009 [495/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:03.009 [496/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:03.009 [497/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.009 [498/710] Linking target lib/librte_efd.so.24.0 00:01:03.009 [499/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:03.009 [500/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:03.009 [501/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:03.009 [502/710] Linking target lib/librte_lpm.so.24.0 00:01:03.009 [503/710] Linking target lib/librte_member.so.24.0 00:01:03.009 [504/710] Linking target lib/librte_ipsec.so.24.0 00:01:03.272 [505/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.272 [506/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:03.272 [507/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:03.272 [508/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:03.272 [509/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.272 [510/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:03.272 [511/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:03.272 [512/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:03.272 [513/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:03.272 [514/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:03.272 [515/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:03.535 [516/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:03.535 [517/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:03.535 [518/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:03.535 [519/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:03.535 [520/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:03.535 [521/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:03.798 [522/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:03.798 [523/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:04.059 [524/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:04.059 [525/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:04.059 [526/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:04.320 [527/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:04.320 [528/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:04.583 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:04.583 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:04.583 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:04.851 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:04.851 [533/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:04.851 [534/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:04.851 [535/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:04.851 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:04.851 [537/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:05.116 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:05.116 [539/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:05.116 [540/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:05.116 [541/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:05.375 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:05.375 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:05.375 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:05.375 [545/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:05.375 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:05.375 [547/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:05.644 [548/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:05.644 [549/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:05.644 [550/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:05.644 [551/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:05.905 [552/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:05.905 [553/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:05.905 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:05.905 [555/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:05.905 [556/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:05.905 [557/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:06.170 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:06.170 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:06.744 
[560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:06.744 [561/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.744 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:06.744 [563/710] Linking target lib/librte_ethdev.so.24.0 00:01:06.744 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:06.744 [565/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:06.744 [566/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:07.013 [567/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:07.013 [568/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:07.013 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:07.013 [570/710] Linking target lib/librte_metrics.so.24.0 00:01:07.013 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:07.013 [572/710] Linking target lib/librte_bpf.so.24.0 00:01:07.013 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:07.013 [574/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:07.279 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:07.279 [576/710] Linking target lib/librte_gro.so.24.0 00:01:07.279 [577/710] Linking target lib/librte_eventdev.so.24.0 00:01:07.279 [578/710] Linking target lib/librte_gso.so.24.0 00:01:07.279 [579/710] Linking target lib/librte_ip_frag.so.24.0 00:01:07.279 [580/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:07.279 [581/710] Linking target lib/librte_pcapng.so.24.0 00:01:07.279 [582/710] Linking target lib/librte_power.so.24.0 00:01:07.279 [583/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:07.279 [584/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:07.279 [585/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:07.279 [586/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:07.279 [587/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:07.279 [588/710] Linking static target lib/librte_pdcp.a 00:01:07.279 [589/710] Linking target lib/librte_bitratestats.so.24.0 00:01:07.279 [590/710] Linking target lib/librte_latencystats.so.24.0 00:01:07.279 [591/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:07.545 [592/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:07.545 [593/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:07.545 [594/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:07.545 [595/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:07.545 [596/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:07.545 [597/710] Linking target lib/librte_dispatcher.so.24.0 00:01:07.545 [598/710] Linking target lib/librte_pdump.so.24.0 00:01:07.545 [599/710] Linking target lib/librte_graph.so.24.0 00:01:07.545 [600/710] Linking target lib/librte_port.so.24.0 00:01:07.545 [601/710] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:07.545 [602/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:07.809 [603/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:07.809 [604/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:07.809 [605/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:07.809 [606/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:07.809 [607/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.071 [608/710] Linking target lib/librte_pdcp.so.24.0 00:01:08.071 [609/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:08.071 [610/710] Linking target lib/librte_table.so.24.0 00:01:08.071 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:08.071 [612/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:08.071 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:08.071 [614/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:08.071 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:08.334 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:08.334 [617/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:08.601 [618/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:08.601 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:08.601 [620/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:08.601 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:08.863 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:08.863 [623/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:08.863 [624/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:08.863 [625/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:08.863 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:08.863 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:09.127 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:09.127 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:09.387 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:09.387 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:09.387 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:09.646 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:09.646 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:09.646 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:09.646 [636/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:09.646 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:09.646 [638/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:09.646 [639/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:09.646 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:09.905 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:09.905 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:09.905 [643/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:10.165 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:10.165 [645/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:10.424 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:10.424 [647/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:10.424 [648/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:10.424 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:10.683 [650/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:10.683 [651/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:10.683 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:10.683 [653/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:10.683 [654/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:10.945 [655/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:11.227 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:11.227 [657/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:11.227 [658/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:11.497 [659/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:11.497 [660/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:11.756 [661/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:11.756 [662/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:11.756 [663/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:11.756 [664/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:11.756 [665/710] Linking static target drivers/librte_net_i40e.a 00:01:11.756 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:11.756 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:11.756 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:12.015 [669/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:12.286 [670/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.286 [671/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:12.556 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:13.126 [673/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:13.126 [674/710] Linking static target lib/librte_node.a 00:01:13.387 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.387 [676/710] Linking target lib/librte_node.so.24.0 00:01:14.769 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 
00:01:15.336 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:15.903 [679/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:16.162 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:17.097 [681/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:35.186 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.864 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.864 [684/710] Linking static target lib/librte_vhost.a 00:02:21.864 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.864 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:34.105 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:34.105 [688/710] Linking static target lib/librte_pipeline.a 00:02:34.105 [689/710] Linking target app/dpdk-test-acl 00:02:34.105 [690/710] Linking target app/dpdk-test-sad 00:02:34.105 [691/710] Linking target app/dpdk-test-dma-perf 00:02:34.105 [692/710] Linking target app/dpdk-dumpcap 00:02:34.105 [693/710] Linking target app/dpdk-proc-info 00:02:34.105 [694/710] Linking target app/dpdk-test-regex 00:02:34.105 [695/710] Linking target app/dpdk-test-compress-perf 00:02:34.105 [696/710] Linking target app/dpdk-test-cmdline 00:02:34.105 [697/710] Linking target app/dpdk-test-fib 00:02:34.105 [698/710] Linking target app/dpdk-test-security-perf 00:02:34.105 [699/710] Linking target app/dpdk-test-bbdev 00:02:34.105 [700/710] Linking target app/dpdk-test-gpudev 00:02:34.105 [701/710] Linking target app/dpdk-pdump 00:02:34.105 [702/710] Linking target app/dpdk-test-flow-perf 00:02:34.105 [703/710] Linking target app/dpdk-graph 00:02:34.105 [704/710] Linking target app/dpdk-test-pipeline 00:02:34.105 [705/710] Linking target app/dpdk-test-crypto-perf 00:02:34.105 [706/710] Linking target app/dpdk-test-mldev 00:02:34.105 [707/710] Linking target app/dpdk-test-eventdev 00:02:34.105 [708/710] Linking target app/dpdk-testpmd 00:02:37.395 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.395 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:37.395 23:15:58 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:37.395 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:37.395 [0/1] Installing files. 
00:02:37.965 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.965 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:37.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:37.967 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.230 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.231 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.231 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:38.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:38.232 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.232 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:38.233 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.615 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:39.616 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:39.616 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:39.616 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.616 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:39.616 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.618 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:39.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:39.620 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:39.620 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:39.620 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:39.620 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:39.620 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:39.620 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:39.620 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:39.620 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:39.620 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:39.620 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:39.620 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:39.620 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:39.620 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:39.620 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:39.620 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:39.620 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:39.620 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:39.620 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:39.620 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:39.620 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:39.620 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:39.620 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:39.620 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:39.620 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:39.620 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:39.620 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:39.620 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:39.620 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:39.620 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:39.620 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:39.620 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:39.620 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:39.620 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:39.620 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:39.620 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:39.620 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:39.620 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:39.620 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:39.620 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:39.620 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:39.620 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:39.620 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:39.620 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:39.620 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:39.620 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:39.620 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:39.620 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:39.620 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:39.620 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:39.620 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:39.620 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:39.620 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:39.620 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:39.620 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:39.620 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:39.620 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:39.621 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:39.621 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:39.621 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:39.621 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:39.621 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:39.621 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:39.621 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:39.621 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:39.621 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:39.621 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:39.621 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:39.621 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:39.621 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:39.621 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:39.621 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:39.621 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:39.621 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:39.621 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:39.621 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:39.621 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:39.621 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:39.621 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:39.621 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:39.621 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:39.621 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:39.621 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:39.621 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:39.621 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:39.621 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:39.621 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:39.621 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:39.621 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:39.621 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:39.621 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:39.621 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:39.621 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:39.621 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:39.621 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:39.621 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:39.621 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:39.621 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:39.621 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:39.621 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:39.621 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:39.621 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:39.621 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:39.621 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:39.621 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:39.621 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:39.621 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:39.621 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:39.621 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:39.621 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:39.621 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:39.621 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:39.621 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:39.621 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:39.621 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:39.621 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:39.621 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:39.621 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:39.621 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:39.621 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:39.621 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:39.621 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:39.621 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:39.621 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:39.621 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:39.621 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:39.621 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:39.621 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:39.621 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:39.621 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:39.621 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:39.621 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:39.621 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:39.621 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:39.880 23:16:00 -- common/autobuild_common.sh@189 -- $ uname -s 00:02:39.880 23:16:00 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:39.880 23:16:00 -- common/autobuild_common.sh@200 -- $ cat 00:02:39.880 23:16:00 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.880 00:02:39.880 real 2m3.310s 00:02:39.880 user 20m5.090s 00:02:39.880 sys 2m18.336s 00:02:39.880 23:16:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:39.880 23:16:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.880 ************************************ 00:02:39.880 END TEST build_native_dpdk 00:02:39.880 ************************************ 00:02:39.880 23:16:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:39.880 23:16:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:39.880 23:16:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:39.880 23:16:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:39.880 23:16:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:39.880 23:16:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:39.880 23:16:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:39.880 23:16:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:39.880 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:40.138 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.138 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.138 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:40.704 Using 'verbs' RDMA provider 00:02:56.160 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:03:11.055 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:11.055 Creating mk/config.mk...done. 00:03:11.055 Creating mk/cc.flags.mk...done. 00:03:11.055 Type 'make' to build. 00:03:11.055 23:16:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:11.055 23:16:31 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:11.055 23:16:31 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:11.055 23:16:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.055 ************************************ 00:03:11.055 START TEST make 00:03:11.055 ************************************ 00:03:11.055 23:16:31 -- common/autotest_common.sh@1104 -- $ make -j48 00:03:11.313 make[1]: Nothing to be done for 'all'. 
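The configure invocation above points SPDK at the just-installed DPDK tree via --with-dpdk, and the libdpdk.pc / libdpdk-libs.pc files installed earlier into build/lib/pkgconfig are what make that resolution work ("Using ... pkgconfig for additional libs"). Outside this test harness, any program can consume the same installation through pkg-config; a minimal sketch, assuming the staged pkgconfig directory shown in the install log above and an illustrative demo.c (neither is part of this run):

  # expose the staged .pc files to pkg-config (path taken from the install log above)
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  # sanity check: prints the installed DPDK version
  pkg-config --modversion libdpdk
  # compile and link against the installed shared DPDK libraries
  cc demo.c $(pkg-config --cflags --libs libdpdk) -o demo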
00:03:12.717 The Meson build system 00:03:12.717 Version: 1.3.1 00:03:12.717 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:12.717 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:12.717 Build type: native build 00:03:12.717 Project name: libvfio-user 00:03:12.717 Project version: 0.0.1 00:03:12.717 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:12.717 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:12.717 Host machine cpu family: x86_64 00:03:12.717 Host machine cpu: x86_64 00:03:12.717 Run-time dependency threads found: YES 00:03:12.717 Library dl found: YES 00:03:12.717 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:12.717 Run-time dependency json-c found: YES 0.17 00:03:12.717 Run-time dependency cmocka found: YES 1.1.7 00:03:12.717 Program pytest-3 found: NO 00:03:12.717 Program flake8 found: NO 00:03:12.717 Program misspell-fixer found: NO 00:03:12.717 Program restructuredtext-lint found: NO 00:03:12.717 Program valgrind found: YES (/usr/bin/valgrind) 00:03:12.717 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:12.717 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:12.717 Compiler for C supports arguments -Wwrite-strings: YES 00:03:12.717 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:12.717 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:12.717 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:12.718 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:12.718 Build targets in project: 8 00:03:12.718 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:12.718 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:12.718 00:03:12.718 libvfio-user 0.0.1 00:03:12.718 00:03:12.718 User defined options 00:03:12.718 buildtype : debug 00:03:12.718 default_library: shared 00:03:12.718 libdir : /usr/local/lib 00:03:12.718 00:03:12.718 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.679 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:13.987 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:13.987 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:13.987 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:13.987 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:13.987 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:13.987 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:13.987 [7/37] Compiling C object samples/null.p/null.c.o 00:03:13.987 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:13.987 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:13.987 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:13.987 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:13.987 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:13.987 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:13.987 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:13.987 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:13.987 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:13.987 [17/37] Compiling C object samples/server.p/server.c.o 00:03:13.987 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:13.987 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:13.987 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:13.987 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:13.987 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:14.255 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:14.255 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:14.255 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:14.255 [26/37] Compiling C object samples/client.p/client.c.o 00:03:14.255 [27/37] Linking target samples/client 00:03:14.255 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:14.255 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:14.255 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:14.255 [31/37] Linking target test/unit_tests 00:03:14.520 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:14.520 [33/37] Linking target samples/lspci 00:03:14.520 [34/37] Linking target samples/gpio-pci-idio-16 00:03:14.520 [35/37] Linking target samples/server 00:03:14.520 [36/37] Linking target samples/null 00:03:14.520 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:14.520 INFO: autodetecting backend as ninja 00:03:14.520 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
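The libvfio-user build above is a standard out-of-tree Meson flow: a build directory is configured with the options shown in the summary (buildtype debug, default_library shared), ninja is autodetected as the backend, and the install that follows is staged under a DESTDIR rather than the real /usr/local libdir. A generic sketch of the same sequence, with an illustrative directory name:

  # configure an out-of-tree build dir with the options from the summary above
  meson setup build-debug --buildtype=debug -Ddefault_library=shared
  # compile with the autodetected ninja backend
  ninja -C build-debug
  # stage the install under a scratch root instead of writing to /usr/local
  DESTDIR=/tmp/libvfio-user-stage meson install --quiet -C build-debug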
00:03:14.787 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.358 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:15.358 ninja: no work to do. 00:03:33.457 CC lib/ut/ut.o 00:03:33.457 CC lib/ut_mock/mock.o 00:03:33.457 CC lib/log/log.o 00:03:33.457 CC lib/log/log_flags.o 00:03:33.457 CC lib/log/log_deprecated.o 00:03:33.457 LIB libspdk_ut.a 00:03:33.457 LIB libspdk_ut_mock.a 00:03:33.457 SO libspdk_ut_mock.so.5.0 00:03:33.457 LIB libspdk_log.a 00:03:33.457 SO libspdk_ut.so.1.0 00:03:33.457 SO libspdk_log.so.6.1 00:03:33.457 SYMLINK libspdk_ut.so 00:03:33.457 SYMLINK libspdk_ut_mock.so 00:03:33.457 SYMLINK libspdk_log.so 00:03:33.457 CC lib/dma/dma.o 00:03:33.457 CXX lib/trace_parser/trace.o 00:03:33.457 CC lib/ioat/ioat.o 00:03:33.457 CC lib/util/bit_array.o 00:03:33.457 CC lib/util/base64.o 00:03:33.457 CC lib/util/cpuset.o 00:03:33.457 CC lib/util/crc16.o 00:03:33.457 CC lib/util/crc32.o 00:03:33.457 CC lib/util/crc32c.o 00:03:33.457 CC lib/util/crc32_ieee.o 00:03:33.457 CC lib/util/crc64.o 00:03:33.457 CC lib/util/dif.o 00:03:33.457 CC lib/util/fd.o 00:03:33.457 CC lib/util/file.o 00:03:33.457 CC lib/util/hexlify.o 00:03:33.457 CC lib/util/iov.o 00:03:33.457 CC lib/util/math.o 00:03:33.457 CC lib/util/pipe.o 00:03:33.457 CC lib/util/strerror_tls.o 00:03:33.457 CC lib/util/string.o 00:03:33.457 CC lib/util/uuid.o 00:03:33.457 CC lib/util/fd_group.o 00:03:33.457 CC lib/util/xor.o 00:03:33.457 CC lib/util/zipf.o 00:03:33.457 CC lib/vfio_user/host/vfio_user_pci.o 00:03:33.457 CC lib/vfio_user/host/vfio_user.o 00:03:33.457 LIB libspdk_dma.a 00:03:33.457 SO libspdk_dma.so.3.0 00:03:33.457 LIB libspdk_ioat.a 00:03:33.457 SYMLINK libspdk_dma.so 00:03:33.457 SO libspdk_ioat.so.6.0 00:03:33.457 SYMLINK libspdk_ioat.so 00:03:33.457 LIB libspdk_vfio_user.a 00:03:33.457 SO libspdk_vfio_user.so.4.0 00:03:33.457 SYMLINK libspdk_vfio_user.so 00:03:33.457 LIB libspdk_util.a 00:03:33.457 SO libspdk_util.so.8.0 00:03:33.717 SYMLINK libspdk_util.so 00:03:33.717 CC lib/rdma/rdma_verbs.o 00:03:33.717 CC lib/rdma/common.o 00:03:33.717 CC lib/vmd/vmd.o 00:03:33.717 CC lib/conf/conf.o 00:03:33.717 CC lib/vmd/led.o 00:03:33.717 CC lib/env_dpdk/env.o 00:03:33.717 CC lib/env_dpdk/memory.o 00:03:33.717 CC lib/env_dpdk/pci.o 00:03:33.717 CC lib/env_dpdk/init.o 00:03:33.717 CC lib/env_dpdk/threads.o 00:03:33.717 CC lib/env_dpdk/pci_ioat.o 00:03:33.717 CC lib/env_dpdk/pci_virtio.o 00:03:33.717 CC lib/env_dpdk/pci_vmd.o 00:03:33.717 CC lib/env_dpdk/pci_idxd.o 00:03:33.717 CC lib/env_dpdk/pci_event.o 00:03:33.717 CC lib/env_dpdk/sigbus_handler.o 00:03:33.717 CC lib/env_dpdk/pci_dpdk.o 00:03:33.717 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:33.717 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:33.717 CC lib/json/json_parse.o 00:03:33.717 CC lib/json/json_util.o 00:03:33.717 CC lib/json/json_write.o 00:03:33.717 CC lib/idxd/idxd.o 00:03:33.717 CC lib/idxd/idxd_user.o 00:03:33.717 CC lib/idxd/idxd_kernel.o 00:03:33.976 LIB libspdk_conf.a 00:03:33.976 SO libspdk_conf.so.5.0 00:03:33.976 LIB libspdk_trace_parser.a 00:03:33.976 LIB libspdk_rdma.a 00:03:34.236 SO libspdk_trace_parser.so.4.0 00:03:34.236 SO libspdk_rdma.so.5.0 00:03:34.236 SYMLINK libspdk_conf.so 00:03:34.236 LIB libspdk_json.a 00:03:34.236 SYMLINK libspdk_rdma.so 00:03:34.236 SO libspdk_json.so.5.1 00:03:34.236 SYMLINK libspdk_trace_parser.so 00:03:34.236 SYMLINK 
libspdk_json.so 00:03:34.495 CC lib/jsonrpc/jsonrpc_server.o 00:03:34.495 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:34.495 CC lib/jsonrpc/jsonrpc_client.o 00:03:34.495 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:34.756 LIB libspdk_idxd.a 00:03:34.756 LIB libspdk_vmd.a 00:03:34.756 SO libspdk_idxd.so.11.0 00:03:34.756 SO libspdk_vmd.so.5.0 00:03:34.756 LIB libspdk_jsonrpc.a 00:03:34.756 SYMLINK libspdk_vmd.so 00:03:34.756 SYMLINK libspdk_idxd.so 00:03:34.756 SO libspdk_jsonrpc.so.5.1 00:03:34.756 SYMLINK libspdk_jsonrpc.so 00:03:35.016 CC lib/rpc/rpc.o 00:03:35.016 LIB libspdk_rpc.a 00:03:35.276 SO libspdk_rpc.so.5.0 00:03:35.276 SYMLINK libspdk_rpc.so 00:03:35.276 CC lib/trace/trace.o 00:03:35.276 CC lib/trace/trace_rpc.o 00:03:35.276 CC lib/trace/trace_flags.o 00:03:35.276 CC lib/sock/sock.o 00:03:35.276 CC lib/sock/sock_rpc.o 00:03:35.276 CC lib/notify/notify.o 00:03:35.276 CC lib/notify/notify_rpc.o 00:03:35.541 LIB libspdk_notify.a 00:03:35.541 SO libspdk_notify.so.5.0 00:03:35.801 LIB libspdk_trace.a 00:03:35.801 SYMLINK libspdk_notify.so 00:03:35.801 SO libspdk_trace.so.9.0 00:03:35.801 SYMLINK libspdk_trace.so 00:03:35.801 LIB libspdk_sock.a 00:03:35.801 SO libspdk_sock.so.8.0 00:03:35.801 SYMLINK libspdk_sock.so 00:03:36.061 CC lib/thread/thread.o 00:03:36.061 CC lib/thread/iobuf.o 00:03:36.061 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:36.061 CC lib/nvme/nvme_ctrlr.o 00:03:36.061 CC lib/nvme/nvme_fabric.o 00:03:36.061 CC lib/nvme/nvme_ns_cmd.o 00:03:36.061 CC lib/nvme/nvme_ns.o 00:03:36.061 CC lib/nvme/nvme_pcie_common.o 00:03:36.061 CC lib/nvme/nvme_pcie.o 00:03:36.061 CC lib/nvme/nvme_qpair.o 00:03:36.061 CC lib/nvme/nvme.o 00:03:36.061 CC lib/nvme/nvme_quirks.o 00:03:36.061 CC lib/nvme/nvme_transport.o 00:03:36.061 CC lib/nvme/nvme_discovery.o 00:03:36.061 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:36.061 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:36.061 CC lib/nvme/nvme_tcp.o 00:03:36.061 CC lib/nvme/nvme_io_msg.o 00:03:36.061 CC lib/nvme/nvme_opal.o 00:03:36.061 CC lib/nvme/nvme_poll_group.o 00:03:36.061 CC lib/nvme/nvme_zns.o 00:03:36.061 CC lib/nvme/nvme_cuse.o 00:03:36.061 CC lib/nvme/nvme_vfio_user.o 00:03:36.061 CC lib/nvme/nvme_rdma.o 00:03:36.061 LIB libspdk_env_dpdk.a 00:03:36.320 SO libspdk_env_dpdk.so.13.0 00:03:36.320 SYMLINK libspdk_env_dpdk.so 00:03:37.699 LIB libspdk_thread.a 00:03:37.699 SO libspdk_thread.so.9.0 00:03:37.699 SYMLINK libspdk_thread.so 00:03:37.957 CC lib/accel/accel.o 00:03:37.957 CC lib/accel/accel_sw.o 00:03:37.957 CC lib/accel/accel_rpc.o 00:03:37.957 CC lib/virtio/virtio.o 00:03:37.957 CC lib/virtio/virtio_vhost_user.o 00:03:37.957 CC lib/virtio/virtio_vfio_user.o 00:03:37.957 CC lib/virtio/virtio_pci.o 00:03:37.957 CC lib/vfu_tgt/tgt_endpoint.o 00:03:37.957 CC lib/vfu_tgt/tgt_rpc.o 00:03:37.957 CC lib/blob/blobstore.o 00:03:37.957 CC lib/blob/request.o 00:03:37.957 CC lib/init/json_config.o 00:03:37.957 CC lib/blob/zeroes.o 00:03:37.957 CC lib/init/subsystem.o 00:03:37.957 CC lib/init/subsystem_rpc.o 00:03:37.957 CC lib/blob/blob_bs_dev.o 00:03:37.957 CC lib/init/rpc.o 00:03:38.216 LIB libspdk_init.a 00:03:38.216 SO libspdk_init.so.4.0 00:03:38.216 LIB libspdk_vfu_tgt.a 00:03:38.216 SO libspdk_vfu_tgt.so.2.0 00:03:38.216 LIB libspdk_virtio.a 00:03:38.216 SYMLINK libspdk_init.so 00:03:38.477 SYMLINK libspdk_vfu_tgt.so 00:03:38.477 SO libspdk_virtio.so.6.0 00:03:38.477 LIB libspdk_nvme.a 00:03:38.477 SYMLINK libspdk_virtio.so 00:03:38.477 CC lib/event/app.o 00:03:38.477 CC lib/event/reactor.o 00:03:38.477 CC lib/event/log_rpc.o 00:03:38.477 CC 
lib/event/app_rpc.o 00:03:38.477 CC lib/event/scheduler_static.o 00:03:38.738 SO libspdk_nvme.so.12.0 00:03:38.999 SYMLINK libspdk_nvme.so 00:03:38.999 LIB libspdk_accel.a 00:03:38.999 SO libspdk_accel.so.14.0 00:03:39.257 SYMLINK libspdk_accel.so 00:03:39.257 LIB libspdk_event.a 00:03:39.257 CC lib/bdev/bdev.o 00:03:39.257 SO libspdk_event.so.12.0 00:03:39.257 CC lib/bdev/bdev_rpc.o 00:03:39.257 CC lib/bdev/bdev_zone.o 00:03:39.257 CC lib/bdev/part.o 00:03:39.257 CC lib/bdev/scsi_nvme.o 00:03:39.516 SYMLINK libspdk_event.so 00:03:44.796 LIB libspdk_blob.a 00:03:44.796 SO libspdk_blob.so.10.1 00:03:44.796 SYMLINK libspdk_blob.so 00:03:44.796 CC lib/blobfs/blobfs.o 00:03:44.796 CC lib/blobfs/tree.o 00:03:44.796 CC lib/lvol/lvol.o 00:03:44.796 LIB libspdk_bdev.a 00:03:44.796 SO libspdk_bdev.so.14.0 00:03:45.064 SYMLINK libspdk_bdev.so 00:03:45.064 CC lib/scsi/dev.o 00:03:45.064 CC lib/nvmf/ctrlr.o 00:03:45.064 CC lib/nvmf/ctrlr_discovery.o 00:03:45.064 CC lib/scsi/lun.o 00:03:45.064 CC lib/ublk/ublk.o 00:03:45.064 CC lib/nbd/nbd.o 00:03:45.064 CC lib/nvmf/ctrlr_bdev.o 00:03:45.064 CC lib/ftl/ftl_core.o 00:03:45.064 CC lib/scsi/port.o 00:03:45.064 CC lib/nvmf/subsystem.o 00:03:45.064 CC lib/ublk/ublk_rpc.o 00:03:45.064 CC lib/nvmf/nvmf.o 00:03:45.064 CC lib/scsi/scsi.o 00:03:45.064 CC lib/nbd/nbd_rpc.o 00:03:45.064 CC lib/ftl/ftl_init.o 00:03:45.064 CC lib/ftl/ftl_layout.o 00:03:45.064 CC lib/scsi/scsi_bdev.o 00:03:45.064 CC lib/nvmf/nvmf_rpc.o 00:03:45.064 CC lib/ftl/ftl_debug.o 00:03:45.064 CC lib/scsi/scsi_pr.o 00:03:45.064 CC lib/nvmf/transport.o 00:03:45.064 CC lib/nvmf/tcp.o 00:03:45.064 CC lib/ftl/ftl_io.o 00:03:45.064 CC lib/nvmf/vfio_user.o 00:03:45.064 CC lib/scsi/scsi_rpc.o 00:03:45.064 CC lib/ftl/ftl_sb.o 00:03:45.064 CC lib/scsi/task.o 00:03:45.065 CC lib/nvmf/rdma.o 00:03:45.065 CC lib/ftl/ftl_l2p.o 00:03:45.065 CC lib/ftl/ftl_l2p_flat.o 00:03:45.065 CC lib/ftl/ftl_nv_cache.o 00:03:45.065 CC lib/ftl/ftl_band.o 00:03:45.065 CC lib/ftl/ftl_band_ops.o 00:03:45.065 CC lib/ftl/ftl_writer.o 00:03:45.065 CC lib/ftl/ftl_rq.o 00:03:45.065 CC lib/ftl/ftl_reloc.o 00:03:45.065 CC lib/ftl/ftl_l2p_cache.o 00:03:45.065 CC lib/ftl/ftl_p2l.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:45.065 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:45.326 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:45.326 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:45.326 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:45.591 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:45.591 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:45.591 CC lib/ftl/utils/ftl_conf.o 00:03:45.591 CC lib/ftl/utils/ftl_md.o 00:03:45.591 CC lib/ftl/utils/ftl_mempool.o 00:03:45.591 CC lib/ftl/utils/ftl_bitmap.o 00:03:45.591 CC lib/ftl/utils/ftl_property.o 00:03:45.591 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:45.591 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:45.591 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:45.591 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:45.591 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:45.591 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:45.591 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:45.591 LIB libspdk_blobfs.a 00:03:45.591 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:45.591 SO libspdk_blobfs.so.9.0 00:03:45.591 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:45.591 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:45.591 
CC lib/ftl/base/ftl_base_dev.o 00:03:45.591 CC lib/ftl/base/ftl_base_bdev.o 00:03:45.856 CC lib/ftl/ftl_trace.o 00:03:45.856 SYMLINK libspdk_blobfs.so 00:03:45.856 LIB libspdk_lvol.a 00:03:45.856 SO libspdk_lvol.so.9.1 00:03:45.856 LIB libspdk_nbd.a 00:03:45.856 SYMLINK libspdk_lvol.so 00:03:45.856 SO libspdk_nbd.so.6.0 00:03:46.116 LIB libspdk_scsi.a 00:03:46.116 SYMLINK libspdk_nbd.so 00:03:46.116 SO libspdk_scsi.so.8.0 00:03:46.116 LIB libspdk_ublk.a 00:03:46.116 SO libspdk_ublk.so.2.0 00:03:46.116 SYMLINK libspdk_scsi.so 00:03:46.116 SYMLINK libspdk_ublk.so 00:03:46.375 CC lib/iscsi/conn.o 00:03:46.375 CC lib/iscsi/init_grp.o 00:03:46.375 CC lib/iscsi/iscsi.o 00:03:46.375 CC lib/iscsi/md5.o 00:03:46.375 CC lib/iscsi/param.o 00:03:46.375 CC lib/iscsi/portal_grp.o 00:03:46.375 CC lib/iscsi/tgt_node.o 00:03:46.375 CC lib/vhost/vhost.o 00:03:46.375 CC lib/iscsi/iscsi_subsystem.o 00:03:46.375 CC lib/vhost/vhost_rpc.o 00:03:46.375 CC lib/iscsi/iscsi_rpc.o 00:03:46.375 CC lib/vhost/vhost_scsi.o 00:03:46.375 CC lib/iscsi/task.o 00:03:46.375 CC lib/vhost/vhost_blk.o 00:03:46.375 CC lib/vhost/rte_vhost_user.o 00:03:46.375 LIB libspdk_ftl.a 00:03:46.635 SO libspdk_ftl.so.8.0 00:03:47.204 SYMLINK libspdk_ftl.so 00:03:48.204 LIB libspdk_vhost.a 00:03:48.204 SO libspdk_vhost.so.7.1 00:03:48.204 LIB libspdk_nvmf.a 00:03:48.204 SYMLINK libspdk_vhost.so 00:03:48.204 SO libspdk_nvmf.so.17.0 00:03:48.464 SYMLINK libspdk_nvmf.so 00:03:48.464 LIB libspdk_iscsi.a 00:03:48.724 SO libspdk_iscsi.so.7.0 00:03:48.724 SYMLINK libspdk_iscsi.so 00:03:48.984 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.984 CC module/vfu_device/vfu_virtio.o 00:03:48.984 CC module/vfu_device/vfu_virtio_blk.o 00:03:48.984 CC module/vfu_device/vfu_virtio_scsi.o 00:03:48.984 CC module/vfu_device/vfu_virtio_rpc.o 00:03:48.984 CC module/blob/bdev/blob_bdev.o 00:03:48.984 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.984 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:48.984 CC module/accel/dsa/accel_dsa.o 00:03:48.984 CC module/sock/posix/posix.o 00:03:48.984 CC module/accel/dsa/accel_dsa_rpc.o 00:03:48.984 CC module/accel/error/accel_error.o 00:03:48.984 CC module/accel/error/accel_error_rpc.o 00:03:48.984 CC module/scheduler/gscheduler/gscheduler.o 00:03:48.984 CC module/accel/ioat/accel_ioat.o 00:03:48.984 CC module/accel/ioat/accel_ioat_rpc.o 00:03:48.984 CC module/accel/iaa/accel_iaa.o 00:03:48.984 CC module/accel/iaa/accel_iaa_rpc.o 00:03:49.242 LIB libspdk_env_dpdk_rpc.a 00:03:49.242 SO libspdk_env_dpdk_rpc.so.5.0 00:03:49.242 LIB libspdk_scheduler_dpdk_governor.a 00:03:49.242 SYMLINK libspdk_env_dpdk_rpc.so 00:03:49.242 LIB libspdk_scheduler_gscheduler.a 00:03:49.242 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:49.242 LIB libspdk_accel_ioat.a 00:03:49.242 SO libspdk_scheduler_gscheduler.so.3.0 00:03:49.242 LIB libspdk_accel_iaa.a 00:03:49.242 LIB libspdk_scheduler_dynamic.a 00:03:49.242 LIB libspdk_accel_error.a 00:03:49.242 SO libspdk_accel_ioat.so.5.0 00:03:49.502 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:49.502 SO libspdk_scheduler_dynamic.so.3.0 00:03:49.502 SO libspdk_accel_iaa.so.2.0 00:03:49.502 SO libspdk_accel_error.so.1.0 00:03:49.502 SYMLINK libspdk_scheduler_gscheduler.so 00:03:49.502 SYMLINK libspdk_accel_ioat.so 00:03:49.502 LIB libspdk_accel_dsa.a 00:03:49.502 SYMLINK libspdk_scheduler_dynamic.so 00:03:49.502 SYMLINK libspdk_accel_error.so 00:03:49.502 SYMLINK libspdk_accel_iaa.so 00:03:49.502 LIB libspdk_blob_bdev.a 00:03:49.502 SO libspdk_accel_dsa.so.4.0 00:03:49.502 SO 
libspdk_blob_bdev.so.10.1 00:03:49.502 SYMLINK libspdk_accel_dsa.so 00:03:49.502 SYMLINK libspdk_blob_bdev.so 00:03:49.761 CC module/bdev/null/bdev_null.o 00:03:49.761 CC module/bdev/null/bdev_null_rpc.o 00:03:49.761 CC module/bdev/malloc/bdev_malloc.o 00:03:49.761 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:49.761 CC module/bdev/error/vbdev_error.o 00:03:49.761 CC module/bdev/error/vbdev_error_rpc.o 00:03:49.761 CC module/bdev/lvol/vbdev_lvol.o 00:03:49.761 CC module/bdev/delay/vbdev_delay.o 00:03:49.761 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:49.761 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:49.761 CC module/bdev/nvme/bdev_nvme.o 00:03:49.761 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:49.761 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:49.761 CC module/bdev/gpt/gpt.o 00:03:49.761 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:49.761 CC module/bdev/nvme/nvme_rpc.o 00:03:49.761 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.761 CC module/bdev/raid/bdev_raid.o 00:03:49.761 CC module/bdev/passthru/vbdev_passthru.o 00:03:49.761 CC module/bdev/ftl/bdev_ftl.o 00:03:49.761 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:49.761 CC module/bdev/raid/bdev_raid_rpc.o 00:03:49.761 CC module/bdev/nvme/bdev_mdns_client.o 00:03:49.761 CC module/bdev/raid/bdev_raid_sb.o 00:03:49.761 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:49.761 CC module/blobfs/bdev/blobfs_bdev.o 00:03:49.761 CC module/bdev/raid/raid0.o 00:03:49.761 CC module/bdev/nvme/vbdev_opal.o 00:03:49.761 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:49.761 CC module/bdev/raid/raid1.o 00:03:49.761 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:49.761 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:49.761 CC module/bdev/raid/concat.o 00:03:49.761 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:49.761 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:49.761 CC module/bdev/split/vbdev_split.o 00:03:49.761 CC module/bdev/aio/bdev_aio.o 00:03:49.761 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:49.761 CC module/bdev/split/vbdev_split_rpc.o 00:03:49.761 CC module/bdev/aio/bdev_aio_rpc.o 00:03:49.761 CC module/bdev/iscsi/bdev_iscsi.o 00:03:49.761 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:49.761 LIB libspdk_vfu_device.a 00:03:50.018 SO libspdk_vfu_device.so.2.0 00:03:50.018 SYMLINK libspdk_vfu_device.so 00:03:50.018 LIB libspdk_blobfs_bdev.a 00:03:50.276 SO libspdk_blobfs_bdev.so.5.0 00:03:50.276 LIB libspdk_sock_posix.a 00:03:50.276 SO libspdk_sock_posix.so.5.0 00:03:50.276 LIB libspdk_bdev_split.a 00:03:50.276 SYMLINK libspdk_blobfs_bdev.so 00:03:50.276 SO libspdk_bdev_split.so.5.0 00:03:50.276 LIB libspdk_bdev_null.a 00:03:50.276 SYMLINK libspdk_sock_posix.so 00:03:50.276 SO libspdk_bdev_null.so.5.0 00:03:50.276 SYMLINK libspdk_bdev_split.so 00:03:50.276 LIB libspdk_bdev_malloc.a 00:03:50.276 LIB libspdk_bdev_gpt.a 00:03:50.276 SO libspdk_bdev_malloc.so.5.0 00:03:50.534 LIB libspdk_bdev_passthru.a 00:03:50.534 LIB libspdk_bdev_aio.a 00:03:50.534 LIB libspdk_bdev_error.a 00:03:50.534 SYMLINK libspdk_bdev_null.so 00:03:50.534 SO libspdk_bdev_gpt.so.5.0 00:03:50.534 LIB libspdk_bdev_ftl.a 00:03:50.534 SO libspdk_bdev_passthru.so.5.0 00:03:50.534 SO libspdk_bdev_error.so.5.0 00:03:50.534 SO libspdk_bdev_aio.so.5.0 00:03:50.534 SO libspdk_bdev_ftl.so.5.0 00:03:50.534 SYMLINK libspdk_bdev_malloc.so 00:03:50.534 LIB libspdk_bdev_zone_block.a 00:03:50.534 LIB libspdk_bdev_delay.a 00:03:50.534 SYMLINK libspdk_bdev_gpt.so 00:03:50.534 LIB libspdk_bdev_iscsi.a 00:03:50.534 SO libspdk_bdev_zone_block.so.5.0 00:03:50.534 SO libspdk_bdev_delay.so.5.0 
00:03:50.534 SYMLINK libspdk_bdev_aio.so 00:03:50.534 SYMLINK libspdk_bdev_passthru.so 00:03:50.534 SYMLINK libspdk_bdev_error.so 00:03:50.534 SO libspdk_bdev_iscsi.so.5.0 00:03:50.534 SYMLINK libspdk_bdev_ftl.so 00:03:50.534 LIB libspdk_bdev_lvol.a 00:03:50.534 SYMLINK libspdk_bdev_zone_block.so 00:03:50.534 SYMLINK libspdk_bdev_delay.so 00:03:50.534 SYMLINK libspdk_bdev_iscsi.so 00:03:50.534 SO libspdk_bdev_lvol.so.5.0 00:03:50.792 SYMLINK libspdk_bdev_lvol.so 00:03:50.792 LIB libspdk_bdev_virtio.a 00:03:50.792 SO libspdk_bdev_virtio.so.5.0 00:03:50.792 SYMLINK libspdk_bdev_virtio.so 00:03:50.792 LIB libspdk_bdev_raid.a 00:03:50.792 SO libspdk_bdev_raid.so.5.0 00:03:51.049 SYMLINK libspdk_bdev_raid.so 00:03:53.005 LIB libspdk_bdev_nvme.a 00:03:53.005 SO libspdk_bdev_nvme.so.6.0 00:03:53.264 SYMLINK libspdk_bdev_nvme.so 00:03:53.522 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:53.522 CC module/event/subsystems/iobuf/iobuf.o 00:03:53.522 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:53.522 CC module/event/subsystems/scheduler/scheduler.o 00:03:53.522 CC module/event/subsystems/vmd/vmd.o 00:03:53.522 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:53.522 CC module/event/subsystems/sock/sock.o 00:03:53.522 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:53.780 LIB libspdk_event_scheduler.a 00:03:53.780 LIB libspdk_event_vmd.a 00:03:53.780 SO libspdk_event_scheduler.so.3.0 00:03:53.780 SO libspdk_event_vmd.so.5.0 00:03:53.780 LIB libspdk_event_sock.a 00:03:53.780 LIB libspdk_event_iobuf.a 00:03:53.780 LIB libspdk_event_vfu_tgt.a 00:03:53.780 SO libspdk_event_sock.so.4.0 00:03:53.780 LIB libspdk_event_vhost_blk.a 00:03:53.780 SYMLINK libspdk_event_vmd.so 00:03:53.780 SO libspdk_event_vfu_tgt.so.2.0 00:03:53.780 SO libspdk_event_iobuf.so.2.0 00:03:53.780 SYMLINK libspdk_event_scheduler.so 00:03:53.780 SO libspdk_event_vhost_blk.so.2.0 00:03:53.780 SYMLINK libspdk_event_sock.so 00:03:53.780 SYMLINK libspdk_event_vhost_blk.so 00:03:53.780 SYMLINK libspdk_event_iobuf.so 00:03:53.780 SYMLINK libspdk_event_vfu_tgt.so 00:03:54.037 CC module/event/subsystems/accel/accel.o 00:03:54.295 LIB libspdk_event_accel.a 00:03:54.553 SO libspdk_event_accel.so.5.0 00:03:54.553 SYMLINK libspdk_event_accel.so 00:03:54.811 CC module/event/subsystems/bdev/bdev.o 00:03:55.069 LIB libspdk_event_bdev.a 00:03:55.069 SO libspdk_event_bdev.so.5.0 00:03:55.328 SYMLINK libspdk_event_bdev.so 00:03:55.328 CC module/event/subsystems/ublk/ublk.o 00:03:55.328 CC module/event/subsystems/scsi/scsi.o 00:03:55.328 CC module/event/subsystems/nbd/nbd.o 00:03:55.328 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:55.328 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:55.586 LIB libspdk_event_scsi.a 00:03:55.586 LIB libspdk_event_nbd.a 00:03:55.586 SO libspdk_event_scsi.so.5.0 00:03:55.586 SO libspdk_event_nbd.so.5.0 00:03:55.586 LIB libspdk_event_ublk.a 00:03:55.586 SYMLINK libspdk_event_nbd.so 00:03:55.586 SO libspdk_event_ublk.so.2.0 00:03:55.586 SYMLINK libspdk_event_scsi.so 00:03:55.844 SYMLINK libspdk_event_ublk.so 00:03:55.844 LIB libspdk_event_nvmf.a 00:03:55.844 SO libspdk_event_nvmf.so.5.0 00:03:55.844 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:55.844 CC module/event/subsystems/iscsi/iscsi.o 00:03:55.844 SYMLINK libspdk_event_nvmf.so 00:03:56.103 LIB libspdk_event_vhost_scsi.a 00:03:56.103 SO libspdk_event_vhost_scsi.so.2.0 00:03:56.103 SYMLINK libspdk_event_vhost_scsi.so 00:03:56.103 LIB libspdk_event_iscsi.a 00:03:56.362 SO libspdk_event_iscsi.so.5.0 00:03:56.362 SYMLINK libspdk_event_iscsi.so 
00:03:56.362 SO libspdk.so.5.0 00:03:56.362 SYMLINK libspdk.so 00:03:56.634 CC app/trace_record/trace_record.o 00:03:56.634 CC app/spdk_lspci/spdk_lspci.o 00:03:56.634 TEST_HEADER include/spdk/accel.h 00:03:56.634 CC app/spdk_nvme_identify/identify.o 00:03:56.634 CC test/rpc_client/rpc_client_test.o 00:03:56.634 CC app/spdk_nvme_perf/perf.o 00:03:56.634 CXX app/trace/trace.o 00:03:56.634 TEST_HEADER include/spdk/accel_module.h 00:03:56.634 CC app/spdk_nvme_discover/discovery_aer.o 00:03:56.634 TEST_HEADER include/spdk/assert.h 00:03:56.634 TEST_HEADER include/spdk/barrier.h 00:03:56.634 CC app/spdk_top/spdk_top.o 00:03:56.634 TEST_HEADER include/spdk/base64.h 00:03:56.634 TEST_HEADER include/spdk/bdev.h 00:03:56.634 TEST_HEADER include/spdk/bdev_module.h 00:03:56.634 TEST_HEADER include/spdk/bdev_zone.h 00:03:56.634 TEST_HEADER include/spdk/bit_array.h 00:03:56.634 TEST_HEADER include/spdk/bit_pool.h 00:03:56.634 TEST_HEADER include/spdk/blob_bdev.h 00:03:56.634 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:56.634 TEST_HEADER include/spdk/blobfs.h 00:03:56.634 TEST_HEADER include/spdk/blob.h 00:03:56.634 TEST_HEADER include/spdk/conf.h 00:03:56.634 TEST_HEADER include/spdk/config.h 00:03:56.634 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:56.634 TEST_HEADER include/spdk/cpuset.h 00:03:56.634 TEST_HEADER include/spdk/crc16.h 00:03:56.634 TEST_HEADER include/spdk/crc32.h 00:03:56.634 TEST_HEADER include/spdk/crc64.h 00:03:56.634 CC app/spdk_dd/spdk_dd.o 00:03:56.634 TEST_HEADER include/spdk/dif.h 00:03:56.634 TEST_HEADER include/spdk/dma.h 00:03:56.634 TEST_HEADER include/spdk/endian.h 00:03:56.634 CC examples/util/zipf/zipf.o 00:03:56.634 CC examples/ioat/verify/verify.o 00:03:56.634 TEST_HEADER include/spdk/env_dpdk.h 00:03:56.634 CC examples/ioat/perf/perf.o 00:03:56.634 CC test/event/event_perf/event_perf.o 00:03:56.634 CC app/nvmf_tgt/nvmf_main.o 00:03:56.634 CC examples/nvme/reconnect/reconnect.o 00:03:56.634 CC test/app/jsoncat/jsoncat.o 00:03:56.634 CC examples/accel/perf/accel_perf.o 00:03:56.634 CC examples/nvme/arbitration/arbitration.o 00:03:56.634 TEST_HEADER include/spdk/env.h 00:03:56.634 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:56.634 CC test/nvme/aer/aer.o 00:03:56.634 CC test/event/reactor/reactor.o 00:03:56.634 CC examples/vmd/lsvmd/lsvmd.o 00:03:56.634 CC test/app/histogram_perf/histogram_perf.o 00:03:56.634 TEST_HEADER include/spdk/event.h 00:03:56.634 CC examples/sock/hello_world/hello_sock.o 00:03:56.634 TEST_HEADER include/spdk/fd_group.h 00:03:56.634 CC app/iscsi_tgt/iscsi_tgt.o 00:03:56.634 CC examples/idxd/perf/perf.o 00:03:56.634 TEST_HEADER include/spdk/fd.h 00:03:56.634 CC app/fio/nvme/fio_plugin.o 00:03:56.634 CC test/app/stub/stub.o 00:03:56.634 CC app/vhost/vhost.o 00:03:56.634 TEST_HEADER include/spdk/file.h 00:03:56.634 CC test/thread/poller_perf/poller_perf.o 00:03:56.634 TEST_HEADER include/spdk/ftl.h 00:03:56.634 CC examples/nvme/hello_world/hello_world.o 00:03:56.634 TEST_HEADER include/spdk/gpt_spec.h 00:03:56.634 TEST_HEADER include/spdk/hexlify.h 00:03:56.634 TEST_HEADER include/spdk/histogram_data.h 00:03:56.634 CC app/spdk_tgt/spdk_tgt.o 00:03:56.634 TEST_HEADER include/spdk/idxd.h 00:03:56.634 TEST_HEADER include/spdk/idxd_spec.h 00:03:56.899 TEST_HEADER include/spdk/init.h 00:03:56.899 TEST_HEADER include/spdk/ioat.h 00:03:56.899 TEST_HEADER include/spdk/ioat_spec.h 00:03:56.899 TEST_HEADER include/spdk/iscsi_spec.h 00:03:56.899 TEST_HEADER include/spdk/json.h 00:03:56.899 TEST_HEADER include/spdk/jsonrpc.h 00:03:56.899 TEST_HEADER 
include/spdk/likely.h 00:03:56.899 TEST_HEADER include/spdk/log.h 00:03:56.899 CC examples/blob/cli/blobcli.o 00:03:56.899 TEST_HEADER include/spdk/lvol.h 00:03:56.899 CC test/bdev/bdevio/bdevio.o 00:03:56.899 CC examples/blob/hello_world/hello_blob.o 00:03:56.899 CC examples/thread/thread/thread_ex.o 00:03:56.899 TEST_HEADER include/spdk/memory.h 00:03:56.899 CC test/blobfs/mkfs/mkfs.o 00:03:56.899 CC test/accel/dif/dif.o 00:03:56.899 TEST_HEADER include/spdk/mmio.h 00:03:56.899 CC examples/bdev/hello_world/hello_bdev.o 00:03:56.899 CC test/dma/test_dma/test_dma.o 00:03:56.899 CC examples/bdev/bdevperf/bdevperf.o 00:03:56.899 TEST_HEADER include/spdk/nbd.h 00:03:56.899 TEST_HEADER include/spdk/notify.h 00:03:56.899 CC examples/nvmf/nvmf/nvmf.o 00:03:56.899 CC test/app/bdev_svc/bdev_svc.o 00:03:56.899 TEST_HEADER include/spdk/nvme.h 00:03:56.899 TEST_HEADER include/spdk/nvme_intel.h 00:03:56.899 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:56.899 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:56.899 CC test/env/mem_callbacks/mem_callbacks.o 00:03:56.899 TEST_HEADER include/spdk/nvme_spec.h 00:03:56.899 TEST_HEADER include/spdk/nvme_zns.h 00:03:56.899 CC test/lvol/esnap/esnap.o 00:03:56.899 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:56.899 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:56.899 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:56.899 TEST_HEADER include/spdk/nvmf.h 00:03:56.899 TEST_HEADER include/spdk/nvmf_spec.h 00:03:56.899 TEST_HEADER include/spdk/nvmf_transport.h 00:03:56.899 TEST_HEADER include/spdk/opal.h 00:03:56.899 TEST_HEADER include/spdk/opal_spec.h 00:03:56.900 TEST_HEADER include/spdk/pci_ids.h 00:03:56.900 TEST_HEADER include/spdk/pipe.h 00:03:56.900 TEST_HEADER include/spdk/queue.h 00:03:56.900 TEST_HEADER include/spdk/reduce.h 00:03:56.900 TEST_HEADER include/spdk/rpc.h 00:03:56.900 TEST_HEADER include/spdk/scheduler.h 00:03:56.900 TEST_HEADER include/spdk/scsi.h 00:03:56.900 TEST_HEADER include/spdk/scsi_spec.h 00:03:56.900 TEST_HEADER include/spdk/sock.h 00:03:56.900 TEST_HEADER include/spdk/stdinc.h 00:03:56.900 TEST_HEADER include/spdk/string.h 00:03:56.900 TEST_HEADER include/spdk/thread.h 00:03:56.900 TEST_HEADER include/spdk/trace.h 00:03:56.900 LINK spdk_lspci 00:03:56.900 TEST_HEADER include/spdk/trace_parser.h 00:03:56.900 TEST_HEADER include/spdk/tree.h 00:03:56.900 TEST_HEADER include/spdk/ublk.h 00:03:56.900 TEST_HEADER include/spdk/util.h 00:03:56.900 TEST_HEADER include/spdk/uuid.h 00:03:56.900 TEST_HEADER include/spdk/version.h 00:03:56.900 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:56.900 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:56.900 TEST_HEADER include/spdk/vhost.h 00:03:56.900 TEST_HEADER include/spdk/vmd.h 00:03:56.900 TEST_HEADER include/spdk/xor.h 00:03:56.900 TEST_HEADER include/spdk/zipf.h 00:03:56.900 CXX test/cpp_headers/accel.o 00:03:56.900 LINK lsvmd 00:03:57.164 LINK rpc_client_test 00:03:57.164 LINK spdk_nvme_discover 00:03:57.164 LINK reactor 00:03:57.164 LINK zipf 00:03:57.164 LINK event_perf 00:03:57.164 LINK poller_perf 00:03:57.164 LINK jsoncat 00:03:57.164 LINK histogram_perf 00:03:57.164 LINK nvmf_tgt 00:03:57.164 LINK vhost 00:03:57.164 LINK interrupt_tgt 00:03:57.164 LINK stub 00:03:57.164 LINK spdk_trace_record 00:03:57.164 LINK verify 00:03:57.164 LINK ioat_perf 00:03:57.164 LINK spdk_tgt 00:03:57.164 LINK bdev_svc 00:03:57.164 LINK mkfs 00:03:57.164 LINK iscsi_tgt 00:03:57.164 LINK hello_sock 00:03:57.164 LINK hello_world 00:03:57.164 LINK hello_bdev 00:03:57.164 LINK hello_blob 00:03:57.164 LINK thread 
00:03:57.431 LINK aer 00:03:57.431 CXX test/cpp_headers/accel_module.o 00:03:57.431 LINK spdk_dd 00:03:57.431 LINK arbitration 00:03:57.431 CC test/env/vtophys/vtophys.o 00:03:57.431 LINK reconnect 00:03:57.431 CXX test/cpp_headers/assert.o 00:03:57.431 LINK idxd_perf 00:03:57.431 LINK nvmf 00:03:57.431 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:57.431 CXX test/cpp_headers/barrier.o 00:03:57.431 CC test/event/reactor_perf/reactor_perf.o 00:03:57.431 CC examples/nvme/hotplug/hotplug.o 00:03:57.431 LINK spdk_trace 00:03:57.431 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:57.431 CC examples/vmd/led/led.o 00:03:57.691 CC test/nvme/reset/reset.o 00:03:57.691 LINK bdevio 00:03:57.691 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:57.691 LINK dif 00:03:57.691 LINK test_dma 00:03:57.691 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:57.691 CC app/fio/bdev/fio_plugin.o 00:03:57.691 LINK accel_perf 00:03:57.691 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.691 CC examples/nvme/abort/abort.o 00:03:57.691 LINK nvme_fuzz 00:03:57.691 CC test/nvme/sgl/sgl.o 00:03:57.691 CC test/nvme/e2edp/nvme_dp.o 00:03:57.691 CC test/event/app_repeat/app_repeat.o 00:03:57.691 LINK vtophys 00:03:57.691 CXX test/cpp_headers/base64.o 00:03:57.691 CC test/env/memory/memory_ut.o 00:03:57.691 LINK nvme_manage 00:03:57.691 CXX test/cpp_headers/bdev.o 00:03:57.691 CXX test/cpp_headers/bdev_module.o 00:03:57.691 CXX test/cpp_headers/bdev_zone.o 00:03:57.957 CC test/nvme/overhead/overhead.o 00:03:57.957 CXX test/cpp_headers/bit_array.o 00:03:57.957 LINK led 00:03:57.957 CXX test/cpp_headers/bit_pool.o 00:03:57.957 LINK reactor_perf 00:03:57.957 CC test/event/scheduler/scheduler.o 00:03:57.957 LINK spdk_nvme 00:03:57.957 CC test/env/pci/pci_ut.o 00:03:57.957 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:57.957 LINK blobcli 00:03:57.957 CC test/nvme/err_injection/err_injection.o 00:03:57.957 CC test/nvme/startup/startup.o 00:03:57.957 CXX test/cpp_headers/blob_bdev.o 00:03:57.957 CC test/nvme/reserve/reserve.o 00:03:57.957 CXX test/cpp_headers/blobfs_bdev.o 00:03:57.957 CC test/nvme/simple_copy/simple_copy.o 00:03:57.957 LINK cmb_copy 00:03:57.957 CC test/nvme/connect_stress/connect_stress.o 00:03:57.957 CC test/nvme/compliance/nvme_compliance.o 00:03:57.957 LINK env_dpdk_post_init 00:03:57.957 CC test/nvme/boot_partition/boot_partition.o 00:03:57.957 LINK hotplug 00:03:57.957 CXX test/cpp_headers/blobfs.o 00:03:57.957 CXX test/cpp_headers/blob.o 00:03:57.957 LINK app_repeat 00:03:57.957 CXX test/cpp_headers/conf.o 00:03:58.223 LINK reset 00:03:58.223 CXX test/cpp_headers/config.o 00:03:58.223 CC test/nvme/fused_ordering/fused_ordering.o 00:03:58.223 CXX test/cpp_headers/cpuset.o 00:03:58.223 CXX test/cpp_headers/crc16.o 00:03:58.223 CXX test/cpp_headers/crc32.o 00:03:58.223 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:58.223 CXX test/cpp_headers/crc64.o 00:03:58.223 LINK mem_callbacks 00:03:58.223 CXX test/cpp_headers/dif.o 00:03:58.223 CXX test/cpp_headers/dma.o 00:03:58.223 CC test/nvme/fdp/fdp.o 00:03:58.223 CXX test/cpp_headers/endian.o 00:03:58.223 LINK pmr_persistence 00:03:58.223 CC test/nvme/cuse/cuse.o 00:03:58.223 LINK sgl 00:03:58.223 CXX test/cpp_headers/env_dpdk.o 00:03:58.223 LINK startup 00:03:58.223 CXX test/cpp_headers/env.o 00:03:58.223 LINK spdk_nvme_perf 00:03:58.223 CXX test/cpp_headers/event.o 00:03:58.223 LINK bdevperf 00:03:58.223 LINK nvme_dp 00:03:58.223 LINK spdk_nvme_identify 00:03:58.223 LINK err_injection 00:03:58.223 CXX test/cpp_headers/fd_group.o 00:03:58.490 
LINK scheduler 00:03:58.490 CXX test/cpp_headers/fd.o 00:03:58.490 CXX test/cpp_headers/file.o 00:03:58.490 LINK connect_stress 00:03:58.490 LINK abort 00:03:58.490 CXX test/cpp_headers/ftl.o 00:03:58.490 LINK reserve 00:03:58.490 CXX test/cpp_headers/gpt_spec.o 00:03:58.490 CXX test/cpp_headers/hexlify.o 00:03:58.490 LINK boot_partition 00:03:58.490 CXX test/cpp_headers/histogram_data.o 00:03:58.490 LINK overhead 00:03:58.490 LINK vhost_fuzz 00:03:58.490 LINK simple_copy 00:03:58.490 CXX test/cpp_headers/idxd.o 00:03:58.490 CXX test/cpp_headers/idxd_spec.o 00:03:58.490 CXX test/cpp_headers/init.o 00:03:58.490 CXX test/cpp_headers/ioat.o 00:03:58.490 LINK spdk_top 00:03:58.490 CXX test/cpp_headers/ioat_spec.o 00:03:58.490 CXX test/cpp_headers/iscsi_spec.o 00:03:58.490 LINK spdk_bdev 00:03:58.490 CXX test/cpp_headers/json.o 00:03:58.490 LINK doorbell_aers 00:03:58.490 LINK fused_ordering 00:03:58.490 CXX test/cpp_headers/jsonrpc.o 00:03:58.490 CXX test/cpp_headers/likely.o 00:03:58.490 CXX test/cpp_headers/log.o 00:03:58.490 CXX test/cpp_headers/lvol.o 00:03:58.490 CXX test/cpp_headers/memory.o 00:03:58.750 LINK pci_ut 00:03:58.750 CXX test/cpp_headers/mmio.o 00:03:58.750 CXX test/cpp_headers/nbd.o 00:03:58.750 CXX test/cpp_headers/notify.o 00:03:58.750 LINK nvme_compliance 00:03:58.750 CXX test/cpp_headers/nvme.o 00:03:58.750 CXX test/cpp_headers/nvme_intel.o 00:03:58.750 CXX test/cpp_headers/nvme_ocssd.o 00:03:58.750 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:58.750 CXX test/cpp_headers/nvme_spec.o 00:03:58.750 CXX test/cpp_headers/nvme_zns.o 00:03:58.750 CXX test/cpp_headers/nvmf_cmd.o 00:03:58.750 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:58.750 CXX test/cpp_headers/nvmf.o 00:03:58.750 CXX test/cpp_headers/nvmf_transport.o 00:03:58.750 CXX test/cpp_headers/nvmf_spec.o 00:03:58.750 CXX test/cpp_headers/opal.o 00:03:58.750 CXX test/cpp_headers/opal_spec.o 00:03:58.750 CXX test/cpp_headers/pci_ids.o 00:03:58.750 CXX test/cpp_headers/pipe.o 00:03:58.750 CXX test/cpp_headers/queue.o 00:03:58.750 CXX test/cpp_headers/reduce.o 00:03:58.750 CXX test/cpp_headers/rpc.o 00:03:58.750 CXX test/cpp_headers/scheduler.o 00:03:58.750 CXX test/cpp_headers/scsi.o 00:03:58.750 CXX test/cpp_headers/scsi_spec.o 00:03:58.750 CXX test/cpp_headers/sock.o 00:03:58.750 CXX test/cpp_headers/stdinc.o 00:03:58.750 CXX test/cpp_headers/string.o 00:03:58.750 CXX test/cpp_headers/thread.o 00:03:58.750 CXX test/cpp_headers/trace.o 00:03:58.750 CXX test/cpp_headers/trace_parser.o 00:03:58.750 CXX test/cpp_headers/tree.o 00:03:59.011 CXX test/cpp_headers/ublk.o 00:03:59.011 CXX test/cpp_headers/util.o 00:03:59.011 LINK fdp 00:03:59.011 CXX test/cpp_headers/uuid.o 00:03:59.011 CXX test/cpp_headers/version.o 00:03:59.011 CXX test/cpp_headers/vfio_user_pci.o 00:03:59.011 CXX test/cpp_headers/vfio_user_spec.o 00:03:59.011 CXX test/cpp_headers/vhost.o 00:03:59.011 CXX test/cpp_headers/vmd.o 00:03:59.011 CXX test/cpp_headers/xor.o 00:03:59.011 CXX test/cpp_headers/zipf.o 00:03:59.577 LINK memory_ut 00:03:59.836 LINK cuse 00:04:00.771 LINK iscsi_fuzz 00:04:07.340 LINK esnap 00:04:07.600 00:04:07.600 real 0m56.608s 00:04:07.600 user 8m14.333s 00:04:07.600 sys 1m48.192s 00:04:07.600 23:17:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:07.600 23:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:04:07.600 ************************************ 00:04:07.600 END TEST make 00:04:07.600 ************************************ 00:04:07.600 23:17:28 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.600 23:17:28 -- nvmf/common.sh@7 -- # uname -s 00:04:07.600 23:17:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.600 23:17:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.600 23:17:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.600 23:17:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.600 23:17:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.600 23:17:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.600 23:17:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.600 23:17:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.600 23:17:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.600 23:17:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.600 23:17:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:07.600 23:17:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:07.600 23:17:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.600 23:17:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.600 23:17:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:07.600 23:17:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.600 23:17:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.600 23:17:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.600 23:17:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.600 23:17:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.600 23:17:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.600 23:17:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.600 23:17:28 -- paths/export.sh@5 -- # export PATH 00:04:07.600 23:17:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.600 23:17:28 -- nvmf/common.sh@46 -- # : 0 00:04:07.600 23:17:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:07.600 23:17:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:07.600 23:17:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:07.600 23:17:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.600 23:17:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.600 23:17:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:07.600 23:17:28 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:04:07.600 23:17:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:07.600 23:17:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.600 23:17:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.600 23:17:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.600 23:17:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.600 23:17:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.600 23:17:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.600 23:17:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.600 23:17:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.600 23:17:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.600 23:17:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.600 23:17:28 -- spdk/autotest.sh@48 -- # udevadm_pid=92293 00:04:07.600 23:17:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.600 23:17:28 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:07.600 23:17:28 -- spdk/autotest.sh@54 -- # echo 92295 00:04:07.600 23:17:28 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:07.600 23:17:28 -- spdk/autotest.sh@56 -- # echo 92296 00:04:07.600 23:17:28 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:07.600 23:17:28 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:04:07.600 23:17:28 -- spdk/autotest.sh@60 -- # echo 92297 00:04:07.600 23:17:28 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:07.600 23:17:28 -- spdk/autotest.sh@62 -- # echo 92298 00:04:07.600 23:17:28 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:07.600 23:17:28 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:07.600 23:17:28 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:07.600 23:17:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:07.600 23:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:07.600 23:17:28 -- spdk/autotest.sh@70 -- # create_test_list 00:04:07.600 23:17:28 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:07.600 23:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:07.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:04:07.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:04:07.860 23:17:28 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:07.860 23:17:28 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.860 23:17:28 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.860 23:17:28 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:07.860 23:17:28 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.860 23:17:28 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:07.860 23:17:28 -- common/autotest_common.sh@1440 -- # uname 00:04:07.860 23:17:28 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:07.860 23:17:28 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:07.860 23:17:28 -- common/autotest_common.sh@1460 -- # uname 00:04:07.860 23:17:28 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:07.860 23:17:28 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:07.860 23:17:28 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:07.860 23:17:28 -- spdk/autotest.sh@83 -- # hash lcov 00:04:07.860 23:17:28 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:07.860 23:17:28 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:07.860 --rc lcov_branch_coverage=1 00:04:07.860 --rc lcov_function_coverage=1 00:04:07.860 --rc genhtml_branch_coverage=1 00:04:07.860 --rc genhtml_function_coverage=1 00:04:07.860 --rc genhtml_legend=1 00:04:07.860 --rc geninfo_all_blocks=1 00:04:07.860 ' 00:04:07.860 23:17:28 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:07.860 --rc lcov_branch_coverage=1 00:04:07.860 --rc lcov_function_coverage=1 00:04:07.860 --rc genhtml_branch_coverage=1 00:04:07.860 --rc genhtml_function_coverage=1 00:04:07.860 --rc genhtml_legend=1 00:04:07.860 --rc geninfo_all_blocks=1 00:04:07.860 ' 00:04:07.860 23:17:28 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:07.860 --rc lcov_branch_coverage=1 00:04:07.860 --rc lcov_function_coverage=1 00:04:07.860 --rc genhtml_branch_coverage=1 00:04:07.860 --rc genhtml_function_coverage=1 00:04:07.860 --rc genhtml_legend=1 00:04:07.860 --rc 
geninfo_all_blocks=1 00:04:07.860 --no-external' 00:04:07.860 23:17:28 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:07.860 --rc lcov_branch_coverage=1 00:04:07.860 --rc lcov_function_coverage=1 00:04:07.860 --rc genhtml_branch_coverage=1 00:04:07.860 --rc genhtml_function_coverage=1 00:04:07.860 --rc genhtml_legend=1 00:04:07.860 --rc geninfo_all_blocks=1 00:04:07.860 --no-external' 00:04:07.860 23:17:28 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:07.860 lcov: LCOV version 1.14 00:04:07.860 23:17:28 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:12.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:12.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 
00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:12.093 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:12.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:12.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:12.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 
00:05:29.552 23:18:43 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:05:29.552 23:18:43 -- common/autotest_common.sh@712 -- # xtrace_disable
00:05:29.552 23:18:43 -- common/autotest_common.sh@10 -- # set +x
00:05:29.552 23:18:43 -- spdk/autotest.sh@102 -- # rm -f
00:05:29.552 23:18:43 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:29.552 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:05:29.552 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:05:29.552 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:05:29.552 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:05:29.552 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:05:29.552 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:05:29.552 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:05:29.552 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:05:29.552 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:05:29.552 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:05:29.552 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:05:29.552 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:05:29.552 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:05:29.552 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:05:29.552 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:05:29.552 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:05:29.552 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:05:29.552 23:18:45 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:05:29.552 23:18:45 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:05:29.552 23:18:45 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:05:29.552 23:18:45 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:05:29.552 23:18:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:05:29.552 23:18:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:05:29.552 23:18:45 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:05:29.552 23:18:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:29.552 23:18:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:05:29.552 23:18:45 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:05:29.552 23:18:45 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:05:29.552 23:18:45 -- spdk/autotest.sh@121 -- # grep -v p
00:05:29.552 23:18:45 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:05:29.552 23:18:45 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:05:29.552 23:18:45 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:05:29.552 23:18:45 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:05:29.552 23:18:45 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:29.552 No valid GPT data, bailing
00:05:29.552 23:18:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:29.552 23:18:45 -- scripts/common.sh@393 -- # pt=
00:05:29.552 23:18:45 -- scripts/common.sh@394 -- # return 1
00:05:29.552 23:18:45 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:29.552 1+0 records in
00:05:29.552 1+0 records out
00:05:29.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394647 s, 266 MB/s
00:05:29.552 23:18:45 -- spdk/autotest.sh@129 -- # sync
00:05:29.552 23:18:45 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:29.552 23:18:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:29.552 23:18:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:29.552 23:18:48 -- spdk/autotest.sh@135 -- # uname -s
00:05:29.552 23:18:48 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:05:29.552 23:18:48 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:05:29.552 23:18:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:29.552 23:18:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:29.552 23:18:48 -- common/autotest_common.sh@10 -- # set +x
00:05:29.552 ************************************
00:05:29.552 START TEST setup.sh
00:05:29.552 ************************************
00:05:29.552 23:18:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
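The get_zoned_devs/is_block_zoned trace above reduces to a single sysfs read: a block device is zoned when /sys/block/<dev>/queue/zoned reads anything other than "none", and zoned namespaces are excluded before the destructive dd wipe. A minimal sketch, simplified relative to the real common/autotest_common.sh helpers:

  # Minimal sketch of the zoned-device check traced above.
  is_block_zoned() {
      local device=$1
      # Missing attribute means the kernel does not consider it zoned.
      [ -e "/sys/block/$device/queue/zoned" ] || return 1
      [ "$(cat "/sys/block/$device/queue/zoned")" != "none" ]
  }
  for nvme in /sys/block/nvme*n*; do
      [ -e "$nvme" ] || continue                 # glob may match nothing
      is_block_zoned "$(basename "$nvme")" && echo "skipping zoned $nvme"
  done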
00:05:29.552 * Looking for test storage...
00:05:29.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:29.552 23:18:48 -- setup/test-setup.sh@10 -- # uname -s
00:05:29.552 23:18:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:05:29.552 23:18:48 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:05:29.552 23:18:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:29.552 23:18:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:29.552 23:18:48 -- common/autotest_common.sh@10 -- # set +x
00:05:29.552 ************************************
00:05:29.552 START TEST acl
00:05:29.552 ************************************
00:05:29.552 23:18:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:05:29.552 * Looking for test storage...
00:05:29.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:29.552 23:18:48 -- setup/acl.sh@10 -- # get_zoned_devs
00:05:29.552 23:18:48 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:05:29.552 23:18:48 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:05:29.552 23:18:48 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:05:29.552 23:18:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:05:29.552 23:18:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:05:29.552 23:18:48 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:05:29.552 23:18:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:29.552 23:18:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:05:29.552 23:18:48 -- setup/acl.sh@12 -- # devs=()
00:05:29.552 23:18:48 -- setup/acl.sh@12 -- # declare -a devs
00:05:29.552 23:18:48 -- setup/acl.sh@13 -- # drivers=()
00:05:29.552 23:18:48 -- setup/acl.sh@13 -- # declare -A drivers
00:05:29.552 23:18:48 -- setup/acl.sh@51 -- # setup reset
00:05:29.552 23:18:48 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:29.552 23:18:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:29.813 23:18:50 -- setup/acl.sh@52 -- # collect_setup_devs
00:05:29.813 23:18:50 -- setup/acl.sh@16 -- # local dev driver
00:05:29.813 23:18:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:29.813 23:18:50 -- setup/acl.sh@15 -- # setup output status
00:05:29.813 23:18:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:29.813 23:18:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:31.723 Hugepages
00:05:31.723 node hugesize free / total
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # continue
00:05:31.723 23:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # continue
00:05:31.723 23:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # continue
00:05:31.723 23:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:31.723
00:05:31.723 Type BDF Vendor Device NUMA Driver Device Block devices
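The collect_setup_devs trace above parses `setup.sh status` with a positional `read`, keeping only rows whose second column looks like a PCI address and whose driver column is nvme. A minimal sketch of that parse (the relative script path is an assumption):

  # Minimal sketch of the collect_setup_devs parse traced above.
  devs=(); declare -A drivers
  while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue    # skip header and hugepage rows
      [[ $driver == nvme ]] || continue    # ioatdma channels are ignored
      devs+=("$dev"); drivers["$dev"]=$driver
  done < <(scripts/setup.sh status)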
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # continue
00:05:31.723 23:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:05:31.723 23:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:31.723 23:18:52 -- setup/acl.sh@20 -- # continue
00:05:31.723 23:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:31.723 [xtrace repeats the same match-and-skip for the remaining ioatdma channels 0000:00:04.1 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7]
00:05:31.723 23:18:52 -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]]
00:05:31.723 23:18:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:31.723 23:18:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]]
00:05:31.723 23:18:52 -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:31.723 23:18:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:31.723 23:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:31.723 23:18:52 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:05:31.723 23:18:52 -- setup/acl.sh@54 -- # run_test denied denied
00:05:31.723 23:18:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:31.723 23:18:52 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:31.723 23:18:52 -- common/autotest_common.sh@10 -- # set +x
00:05:31.723 ************************************
00:05:31.723 START TEST denied
00:05:31.723 ************************************
00:05:31.724 23:18:52 -- common/autotest_common.sh@1104 -- # denied
00:05:31.724 23:18:52 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0'
00:05:31.724 23:18:52 -- setup/acl.sh@38 -- # setup output config
00:05:31.724 23:18:52 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0'
00:05:31.724 23:18:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:31.724 23:18:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:33.633 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0
00:05:33.633 23:18:54 -- setup/acl.sh@40 -- # verify 0000:82:00.0
00:05:33.633 23:18:54 -- setup/acl.sh@28 -- # local dev driver
00:05:33.633 23:18:54 -- setup/acl.sh@30 -- # for dev in "$@"
00:05:33.633 23:18:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]]
00:05:33.633 23:18:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver
00:05:33.633 23:18:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:33.633 23:18:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:33.633 23:18:54 -- setup/acl.sh@41 -- # setup reset
00:05:33.633 23:18:54 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:33.633 23:18:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:36.930
00:05:36.930 real 0m4.881s
00:05:36.930 user 0m1.500s
00:05:36.930 sys 0m2.517s
00:05:36.930 23:18:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:36.930 23:18:57 -- common/autotest_common.sh@10 -- # set +x
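The "denied" test traced above asserts two things: with the controller's BDF in PCI_BLOCKED, `setup.sh config` must announce that it skipped the device, and the device must still be bound to the kernel nvme driver afterwards. A minimal sketch of that assertion (the relative script path is an assumption):

  # Minimal sketch of the "denied" assertion traced above.
  bdf=0000:82:00.0                                   # controller under test
  PCI_BLOCKED=" $bdf" scripts/setup.sh config \
      | grep "Skipping denied controller at $bdf"    # test fails if absent
  driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
  [[ $driver == */drivers/nvme ]] || exit 1          # must not be rebound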
00:05:36.930 ************************************
00:05:36.930 END TEST denied
00:05:36.930 ************************************
00:05:36.930 23:18:57 -- setup/acl.sh@55 -- # run_test allowed allowed
00:05:36.930 23:18:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:36.930 23:18:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:36.930 23:18:57 -- common/autotest_common.sh@10 -- # set +x
00:05:36.930 ************************************
00:05:36.930 START TEST allowed
00:05:36.930 ************************************
00:05:36.930 23:18:57 -- common/autotest_common.sh@1104 -- # allowed
00:05:36.930 23:18:57 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0
00:05:36.930 23:18:57 -- setup/acl.sh@45 -- # setup output config
00:05:36.930 23:18:57 -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*'
00:05:36.930 23:18:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:36.930 23:18:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:39.516 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:05:39.516 23:19:00 -- setup/acl.sh@47 -- # verify
00:05:39.516 23:19:00 -- setup/acl.sh@28 -- # local dev driver
00:05:39.516 23:19:00 -- setup/acl.sh@48 -- # setup reset
00:05:39.516 23:19:00 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:39.516 23:19:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:41.423
00:05:41.423 real 0m4.807s
00:05:41.423 user 0m1.304s
00:05:41.423 sys 0m2.394s
00:05:41.423 23:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.423 23:19:02 -- common/autotest_common.sh@10 -- # set +x
00:05:41.423 ************************************
00:05:41.423 END TEST allowed
00:05:41.423 ************************************
00:05:41.423
00:05:41.423 real 0m13.516s
00:05:41.423 user 0m4.260s
00:05:41.423 sys 0m7.417s
00:05:41.423 23:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.423 23:19:02 -- common/autotest_common.sh@10 -- # set +x
00:05:41.423 ************************************
00:05:41.423 END TEST acl
00:05:41.423 ************************************
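The "allowed" test is the mirror image of "denied": with only this BDF in PCI_ALLOWED, `setup.sh config` should rebind the controller from the kernel nvme driver to vfio-pci, and the grep proves the rebind happened. A minimal sketch (the relative script path is an assumption):

  # Minimal sketch of the "allowed" assertion traced above.
  bdf=0000:82:00.0
  PCI_ALLOWED="$bdf" scripts/setup.sh config \
      | grep -E "$bdf .*: nvme -> .*"    # e.g. "nvme -> vfio-pci"
  scripts/setup.sh reset                 # hand the device back to the kernel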
00:05:41.423 23:19:02 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:05:41.423 23:19:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:41.423 23:19:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:41.423 23:19:02 -- common/autotest_common.sh@10 -- # set +x
00:05:41.423 ************************************
00:05:41.423 START TEST hugepages
00:05:41.423 ************************************
00:05:41.423 23:19:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:05:41.423 * Looking for test storage...
00:05:41.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:41.423 23:19:02 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:41.423 23:19:02 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:41.423 23:19:02 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:41.423 23:19:02 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:41.423 23:19:02 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:41.423 23:19:02 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:41.423 23:19:02 -- setup/common.sh@17 -- # local get=Hugepagesize
00:05:41.423 23:19:02 -- setup/common.sh@18 -- # local node=
00:05:41.423 23:19:02 -- setup/common.sh@19 -- # local var val
00:05:41.423 23:19:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.423 23:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.423 23:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.423 23:19:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.423 23:19:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.423 23:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.423 23:19:02 -- setup/common.sh@31 -- # IFS=': '
00:05:41.423 23:19:02 -- setup/common.sh@31 -- # read -r var val _
00:05:41.423 23:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 23038688 kB' 'MemAvailable: 28061588 kB' 'Buffers: 2704 kB' 'Cached: 13960684 kB' 'SwapCached: 0 kB' 'Active: 9828092 kB' 'Inactive: 4662844 kB' 'Active(anon): 9427076 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531344 kB' 'Mapped: 202540 kB' 'Shmem: 8899528 kB' 'KReclaimable: 505380 kB' 'Slab: 901172 kB' 'SReclaimable: 505380 kB' 'SUnreclaim: 395792 kB' 'KernelStack: 12416 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 10561572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:41.423 23:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:41.423 23:19:02 -- setup/common.sh@32 -- # continue
00:05:41.423 23:19:02 -- setup/common.sh@31 -- # IFS=': '
00:05:41.423 23:19:02 -- setup/common.sh@31 -- # read -r var val _
00:05:41.424 [xtrace repeats the same compare-and-continue for every following /proc/meminfo field until Hugepagesize matches]
00:05:41.425 23:19:02 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:41.425 23:19:02 -- setup/common.sh@33 -- # echo 2048
00:05:41.425 23:19:02 -- setup/common.sh@33 -- # return 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:41.425 23:19:02 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:41.425 23:19:02 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:41.425 23:19:02 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:41.425 23:19:02 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:41.425 23:19:02 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
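The long scan above is just a field-by-field read of /proc/meminfo until the requested key matches. A compact equivalent, as a minimal sketch (the real setup/common.sh helper also supports per-node meminfo, which this omits):

  # Minimal sketch of the get_meminfo scan traced above.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip non-matching fields
          echo "$val"                        # e.g. 2048 for Hugepagesize
          return 0
      done < /proc/meminfo
      return 1
  }
  get_meminfo Hugepagesize                   # prints 2048 on this node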
00:05:41.425 23:19:02 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:41.425 23:19:02 -- setup/hugepages.sh@207 -- # get_nodes
00:05:41.425 23:19:02 -- setup/hugepages.sh@27 -- # local node
00:05:41.425 23:19:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.425 23:19:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:41.425 23:19:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.425 23:19:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:41.425 23:19:02 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:41.425 23:19:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:41.425 23:19:02 -- setup/hugepages.sh@208 -- # clear_hp
00:05:41.425 23:19:02 -- setup/hugepages.sh@37 -- # local node hp
00:05:41.425 23:19:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:41.425 23:19:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:41.425 23:19:02 -- setup/hugepages.sh@41 -- # echo 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:41.425 23:19:02 -- setup/hugepages.sh@41 -- # echo 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:41.425 23:19:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:41.425 23:19:02 -- setup/hugepages.sh@41 -- # echo 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:41.425 23:19:02 -- setup/hugepages.sh@41 -- # echo 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:41.425 23:19:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:41.425 23:19:02 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:41.425 23:19:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:41.425 23:19:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:41.425 23:19:02 -- common/autotest_common.sh@10 -- # set +x
00:05:41.425 ************************************
00:05:41.425 START TEST default_setup
00:05:41.425 ************************************
00:05:41.425 23:19:02 -- common/autotest_common.sh@1104 -- # default_setup
00:05:41.425 23:19:02 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:41.425 23:19:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:41.425 23:19:02 -- setup/hugepages.sh@51 -- # shift
00:05:41.425 23:19:02 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:41.425 23:19:02 -- setup/hugepages.sh@52 -- # local node_ids
00:05:41.425 23:19:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:41.425 23:19:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:41.425 23:19:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:41.425 23:19:02 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.425 23:19:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:41.425 23:19:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:41.425 23:19:02 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.425 23:19:02 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.425 23:19:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
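The clear_hp trace above resets the per-node hugepage reservations before the test allocates its own. A minimal sketch of that loop, assuming the standard sysfs layout it iterates over:

  # Minimal sketch of the clear_hp loop traced above: zero the reserved
  # count for every hugepage size on every NUMA node.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"    # release node-local huge pages
      done
  done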
00:05:41.425 23:19:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:41.425 23:19:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:41.425 23:19:02 -- setup/hugepages.sh@73 -- # return 0
00:05:41.425 23:19:02 -- setup/hugepages.sh@137 -- # setup output
00:05:41.425 23:19:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.425 23:19:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:43.334 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:43.334 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:43.334 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:43.334 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:43.334 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:43.334 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:43.334 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:43.334 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:43.334 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:44.273 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:05:44.273 23:19:05 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:44.273 23:19:05 -- setup/hugepages.sh@89 -- # local node
00:05:44.273 23:19:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:44.273 23:19:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:44.273 23:19:05 -- setup/hugepages.sh@92 -- # local surp
00:05:44.273 23:19:05 -- setup/hugepages.sh@93 -- # local resv
00:05:44.273 23:19:05 -- setup/hugepages.sh@94 -- # local anon
00:05:44.273 23:19:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:44.273 23:19:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:44.273 23:19:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:44.273 23:19:05 -- setup/common.sh@18 -- # local node=
00:05:44.273 23:19:05 -- setup/common.sh@19 -- # local var val
00:05:44.273 23:19:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:44.273 23:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.273 23:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.273 23:19:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.273 23:19:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.273 23:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.273 23:19:05 -- setup/common.sh@31 -- # IFS=': '
00:05:44.273 23:19:05 -- setup/common.sh@31 -- # read -r var val _
00:05:44.273 23:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25128100 kB' 'MemAvailable: 30150912 kB' 'Buffers: 2704 kB' 'Cached: 13960780 kB' 'SwapCached: 0 kB' 'Active: 9844984 kB' 'Inactive: 4662844 kB' 'Active(anon): 9443968 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547592 kB' 'Mapped: 202676 kB' 'Shmem: 8899624 kB' 'KReclaimable: 505292 kB' 'Slab: 900680 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395388 kB' 'KernelStack: 12512 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10580260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196872 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
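The verify_nr_hugepages step above only reads AnonHugePages when transparent hugepages are not pinned to "never" (the [[ ... != *\[never\]* ]] guard). A minimal sketch of that guard, reusing the get_meminfo helper sketched earlier (the exact use verify_nr_hugepages makes of the value is not shown in this log):

  # Minimal sketch of the anonymous-hugepage guard traced above.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. always [madvise] never
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # helper sketched earlier
      echo "THP mode: $thp, AnonHugePages: ${anon} kB"
  fi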
00:05:44.274 23:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:44.274 23:19:05 -- setup/common.sh@32 -- # continue
00:05:44.274 23:19:05 -- setup/common.sh@31 -- # IFS=': '
00:05:44.274 23:19:05 -- setup/common.sh@31 -- # read -r var val _
00:05:44.274 [xtrace repeats the same compare-and-continue for every following /proc/meminfo field until AnonHugePages matches]
00:05:44.274 23:19:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:44.274 23:19:05 -- setup/common.sh@33 -- # echo 0
00:05:44.274 23:19:05 -- setup/common.sh@33 -- # return 0
00:05:44.275 23:19:05 -- setup/hugepages.sh@97 -- # anon=0
00:05:44.275 23:19:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:44.275 23:19:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:44.275 23:19:05 -- setup/common.sh@18 -- # local node=
00:05:44.275 23:19:05 -- setup/common.sh@19 -- # local var val
00:05:44.275 23:19:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:44.275 23:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.275 23:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.275 23:19:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.275 23:19:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.275 23:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': '
00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _
00:05:44.275 23:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25135784 kB' 'MemAvailable: 30158596 kB' 'Buffers: 2704 kB' 'Cached: 13960784 kB' 'SwapCached: 0 kB' 'Active: 9844556 kB' 'Inactive: 4662844 kB' 'Active(anon): 9443540 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547192 kB' 'Mapped: 202700 kB' 'Shmem: 8899628 kB' 'KReclaimable: 505292 kB' 'Slab: 900768 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395476 kB' 'KernelStack: 12336 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10576084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue
00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': '
00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _
00:05:44.275 [xtrace repeats the same compare-and-continue for each following /proc/meminfo field]
00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ SReclaimable ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.275 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.275 23:19:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 
-- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.276 23:19:05 -- setup/common.sh@33 -- # echo 0 00:05:44.276 23:19:05 -- setup/common.sh@33 -- # return 0 00:05:44.276 23:19:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:44.276 23:19:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:44.276 23:19:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:44.276 23:19:05 -- setup/common.sh@18 -- # local node= 00:05:44.276 23:19:05 -- setup/common.sh@19 -- # local var val 00:05:44.276 23:19:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.276 23:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.276 23:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.276 23:19:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.276 23:19:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.276 23:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25135476 kB' 'MemAvailable: 30158288 kB' 'Buffers: 2704 kB' 'Cached: 13960796 kB' 'SwapCached: 0 kB' 'Active: 9844244 kB' 'Inactive: 4662844 kB' 'Active(anon): 9443228 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546884 kB' 'Mapped: 202624 kB' 'Shmem: 8899640 kB' 'KReclaimable: 505292 kB' 'Slab: 900752 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395460 kB' 'KernelStack: 12352 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10576100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB' 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.276 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.276 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.538 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.538 23:19:05 -- setup/common.sh@32 -- # [[ Buffers == 
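[note] The loop traced above is a plain field-match scan over /proc/meminfo. A minimal sketch of that pattern, assuming bash 4+; the helper name get_meminfo_value is hypothetical, not SPDK's actual setup/common.sh function:

    # Hypothetical sketch of the scan pattern traced above: read each
    # "Key: value kB" line, skip non-matching keys via the loop, and
    # print the value of the requested key once it matches.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    # usage: get_meminfo_value HugePages_Surp   # prints 0 on this host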
00:05:44.276 23:19:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:44.276 23:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.276 23:19:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.276 23:19:05 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, near-identical to the one above; only transient counters drift: 'MemFree: 25135476 kB' 'AnonPages: 546884 kB' 'KernelStack: 12352 kB' 'PageTables: 8308 kB' 'Committed_AS: 10576100 kB' 'VmallocUsed: 196696 kB'; all hugepage counters unchanged: 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0']
[xtrace condensed: snapshot keys scanned until HugePages_Rsvd matches]
00:05:44.539 23:19:05 -- setup/common.sh@33 -- # echo 0
00:05:44.539 23:19:05 -- setup/common.sh@33 -- # return 0
00:05:44.539 23:19:05 -- setup/hugepages.sh@100 -- # resv=0
00:05:44.539 23:19:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:44.539 23:19:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:44.539 23:19:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:44.539 23:19:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:44.539 23:19:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:44.539 23:19:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
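[note] The four values echoed above feed one consistency check: the kernel's HugePages_Total should equal the requested page count plus surplus and reserved pages. A hedged sketch of that arithmetic, reusing the hypothetical helper sketched earlier:

    # Sketch of the accounting check (assumed semantics, mirroring the
    # (( 1024 == nr_hugepages + surp + resv )) test in the trace).
    nr_hugepages=1024
    surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_value HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) \
        && echo "hugepage accounting consistent: total=$total"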
00:05:44.539 23:19:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:44.539 23:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.539 23:19:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.539 23:19:05 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, again near-identical; transient counters: 'MemFree: 25135224 kB' 'AnonPages: 546920 kB' 'KernelStack: 12368 kB' 'PageTables: 8356 kB' 'Committed_AS: 10576112 kB'; hugepage counters unchanged: 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB']
[xtrace condensed: snapshot keys scanned until HugePages_Total matches]
00:05:44.540 23:19:05 -- setup/common.sh@33 -- # echo 1024
00:05:44.540 23:19:05 -- setup/common.sh@33 -- # return 0
00:05:44.540 23:19:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:44.540 23:19:05 -- setup/hugepages.sh@112 -- # get_nodes
00:05:44.540 23:19:05 -- setup/hugepages.sh@27 -- # local node
00:05:44.540 23:19:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:44.540 23:19:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:44.540 23:19:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:44.540 23:19:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:44.540 23:19:05 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:44.540 23:19:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:44.540 23:19:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:44.540 23:19:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:44.540 23:19:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:44.540 23:19:05 -- setup/common.sh@18 -- # local node=0
00:05:44.540 23:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:44.540 23:19:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:44.540 23:19:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.540 23:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.540 23:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 10804228 kB' 'MemUsed: 13815184 kB' 'SwapCached: 0 kB' 'Active: 7267328 kB' 'Inactive: 3443236 kB' 'Active(anon): 7104408 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565712 kB' 'Mapped: 125604 kB' 'AnonPages: 147956 kB' 'Shmem: 6959556 kB' 'KernelStack: 7608 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582096 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
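[note] The node=0 variant above only swaps the input file: when a node id is given, the same scan runs against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") step strips first. A sketch of that per-node lookup; the helper name is hypothetical and sed is used here instead of the extglob stripping in the trace:

    # Hypothetical per-node variant of the same scan: pick the node's
    # meminfo file and strip the "Node N " line prefix before matching.
    get_node_meminfo_value() {
        local node=$1 get=$2 var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        [[ -e $mem_f ]] || mem_f=/proc/meminfo
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # usage: get_node_meminfo_value 0 HugePages_Total   # 1024 on node0 here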
kB' 'Active: 7267328 kB' 'Inactive: 3443236 kB' 'Active(anon): 7104408 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565712 kB' 'Mapped: 125604 kB' 'AnonPages: 147956 kB' 'Shmem: 6959556 kB' 'KernelStack: 7608 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582096 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.540 23:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.540 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.540 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.540 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.540 23:19:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.540 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.540 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.540 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.541 23:19:05 -- setup/common.sh@32 -- # continue 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.541 23:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.541 
[... xtrace records elided: setup/common.sh@31/@32 compared per-node meminfo fields Unevictable through HugePages_Free against HugePages_Surp, each taking the continue branch ...]
00:05:44.541 23:19:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.541 23:19:05 -- setup/common.sh@33 -- # echo 0
00:05:44.541 23:19:05 -- setup/common.sh@33 -- # return 0
00:05:44.541 23:19:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:44.541 23:19:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:44.541 23:19:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:44.541 23:19:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:44.541 23:19:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:44.541 node0=1024 expecting 1024
00:05:44.541 23:19:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:44.541 
00:05:44.541 real	0m3.101s
00:05:44.541 user	0m0.917s
00:05:44.541 sys	0m1.347s
00:05:44.541 23:19:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:44.541 23:19:05 -- common/autotest_common.sh@10 -- # set +x
00:05:44.541 ************************************
00:05:44.541 END TEST default_setup
00:05:44.541 ************************************
00:05:44.541 23:19:05 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:44.541 23:19:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:44.541 23:19:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:44.541 23:19:05 -- common/autotest_common.sh@10 -- # set +x
00:05:44.541 ************************************
00:05:44.541 START TEST per_node_1G_alloc
00:05:44.541 ************************************
00:05:44.541 23:19:05 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:44.541 23:19:05 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:44.541 23:19:05 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:44.541 23:19:05 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:44.541 23:19:05 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:44.541 23:19:05 -- setup/hugepages.sh@51 -- # shift
00:05:44.541 23:19:05 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:44.541 23:19:05 -- setup/hugepages.sh@52 -- # local node_ids
00:05:44.541 23:19:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:44.541 23:19:05 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:44.541 23:19:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:44.541 23:19:05 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:44.541 23:19:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:44.541 23:19:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:44.541 23:19:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:44.541 23:19:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:44.541 23:19:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:44.541 23:19:05 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:44.541 23:19:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:44.541 23:19:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:44.541 23:19:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:44.541 23:19:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:44.541 23:19:05 -- setup/hugepages.sh@73 -- # return 0
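The get_test_nr_hugepages trace above reduces to a simple sizing rule: 1048576 kB requested per node with the 2048 kB default hugepage size gives 512 pages, assigned to each of nodes 0 and 1 (1024 pages total). A minimal bash sketch of that logic, with names mirroring the traced setup/hugepages.sh helpers (the in-tree script carries more bookkeeping than shown here):

    #!/usr/bin/env bash
    # Sketch only: per-node hugepage sizing as implied by the trace above.
    default_hugepages=2048              # kB, Hugepagesize from /proc/meminfo
    declare -a nodes_test

    get_test_nr_hugepages() {           # <size-in-kB> [node-id ...]
        local size=$1 node
        shift
        local nr_hugepages=$((size / default_hugepages))  # 1048576 / 2048 = 512
        for node in "$@"; do            # every listed node gets the full count
            nodes_test[node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 1048576 0 1
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # 512 each, 1024 total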
00:05:44.542 23:19:05 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:44.542 23:19:05 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:44.542 23:19:05 -- setup/hugepages.sh@146 -- # setup output
00:05:44.542 23:19:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:44.542 23:19:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:45.919 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:45.919 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:45.919 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:45.919 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:45.919 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:45.919 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:45.919 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:45.919 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:45.919 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:45.919 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:45.919 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:45.919 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:45.919 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:46.179 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:46.179 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:46.179 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:46.179 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:46.179 23:19:07 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:46.179 23:19:07 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:46.179 23:19:07 -- setup/hugepages.sh@89 -- # local node
00:05:46.179 23:19:07 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.179 23:19:07 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.179 23:19:07 -- setup/hugepages.sh@92 -- # local surp
00:05:46.179 23:19:07 -- setup/hugepages.sh@93 -- # local resv
00:05:46.179 23:19:07 -- setup/hugepages.sh@94 -- # local anon
00:05:46.179 23:19:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.179 23:19:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:46.179 23:19:07 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.179 23:19:07 -- setup/common.sh@18 -- # local node=
00:05:46.179 23:19:07 -- setup/common.sh@19 -- # local var val
00:05:46.179 23:19:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.179 23:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.179 23:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.179 23:19:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.179 23:19:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.179 23:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.179 23:19:07 -- setup/common.sh@31 -- # IFS=': '
00:05:46.179 23:19:07 -- setup/common.sh@31 -- # read -r var val _
00:05:46.179 23:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25143028 kB' 'MemAvailable: 30165840 kB' 'Buffers: 2704 kB' 'Cached: 13960856 kB' 'SwapCached: 0 kB' 'Active: 9845092 kB' 'Inactive: 4662844 kB' 'Active(anon): 9444076 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547080 kB' 'Mapped: 202668 kB' 'Shmem: 8899700 kB' 'KReclaimable: 505292 kB' 'Slab: 900720 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395428 kB' 'KernelStack: 12336 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10576280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196808 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[... xtrace records elided: setup/common.sh@31/@32 compared fields MemTotal through HardwareCorrupted against AnonHugePages, each taking the continue branch ...]
00:05:46.180 23:19:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.180 23:19:07 -- setup/common.sh@33 -- # echo 0
00:05:46.180 23:19:07 -- setup/common.sh@33 -- # return 0
00:05:46.180 23:19:07 -- setup/hugepages.sh@97 -- # anon=0
00:05:46.180 23:19:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:46.180 23:19:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.180 23:19:07 -- setup/common.sh@18 -- # local node=
00:05:46.180 23:19:07 -- setup/common.sh@19 -- # local var val
00:05:46.180 23:19:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.180 23:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.180 23:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.180 23:19:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.180 23:19:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.180 23:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.180 23:19:07 -- setup/common.sh@31 -- # IFS=': '
00:05:46.180 23:19:07 -- setup/common.sh@31 -- # read -r var val _
00:05:46.181 23:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25145644 kB' 'MemAvailable: 30168456 kB' 'Buffers: 2704 kB' 'Cached: 13960860 kB' 'SwapCached: 0 kB' 'Active: 9844484 kB' 'Inactive: 4662844 kB' 'Active(anon): 9443468 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546992 kB' 'Mapped: 202628 kB' 'Shmem: 8899704 kB' 'KReclaimable: 505292 kB' 'Slab: 900688 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395396 kB' 'KernelStack: 12368 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10576292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196776 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
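Every "[[ <field> == ... ]]" / "continue" pair in the bracketed scans is one iteration of the field loop inside get_meminfo. A condensed sketch of that loop, reconstructed from the xtrace records above (the per-node meminfo handling visible at the @23/@29 steps is omitted here):

    # Sketch: walk /proc/meminfo and echo the value of the first field
    # whose name matches $1, as the traced setup/common.sh loop does.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the repeated continue records
            echo "$val"                       # e.g. 0 for HugePages_Surp
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # printed 0 in the run traced above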
[... xtrace records elided: setup/common.sh@31/@32 compared fields MemTotal through HugePages_Rsvd against HugePages_Surp, each taking the continue branch ...]
00:05:46.444 23:19:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.444 23:19:07 -- setup/common.sh@33 -- # echo 0
00:05:46.444 23:19:07 -- setup/common.sh@33 -- # return 0
00:05:46.444 23:19:07 -- setup/hugepages.sh@99 -- # surp=0
00:05:46.444 23:19:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:46.444 23:19:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:46.444 23:19:07 -- setup/common.sh@18 -- # local node=
00:05:46.444 23:19:07 -- setup/common.sh@19 -- # local var val
00:05:46.444 23:19:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.444 23:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.444 23:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.444 23:19:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.444 23:19:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.444 23:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.444 23:19:07 -- setup/common.sh@31 -- # IFS=': '
00:05:46.444 23:19:07 -- setup/common.sh@31 -- # read -r var val _
00:05:46.444 23:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25145648 kB' 'MemAvailable: 30168460 kB' 'Buffers: 2704 kB' 'Cached: 13960872 kB' 'SwapCached: 0 kB' 'Active: 9844556 kB' 'Inactive: 4662844 kB' 'Active(anon): 9443540 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546972 kB' 'Mapped: 202628 kB' 'Shmem: 8899716 kB' 'KReclaimable: 505292 kB' 'Slab: 900768 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395476 kB' 'KernelStack: 12368 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10576304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196776 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
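The @22 through @29 records repeated in each invocation show how the data source is chosen: node is empty in all of these calls, so the probe for /sys/devices/system/node/node/meminfo fails and the global /proc/meminfo is read; with a node id set, the per-node file would be used and its "Node <N> " prefix stripped before parsing. A sketch of that selection under the same assumptions (extglob enabled, as the +([0-9]) pattern requires):

    shopt -s extglob                     # the +([0-9]) pattern needs extglob
    node=                                # empty in every call traced here
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # taken when node=0, node=1, ...
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # strip per-node prefixes; no-op for /proc/meminfo
    printf '%s\n' "${mem[@]:0:3}"        # first few parsed lines, for illustration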
[... xtrace records elided: setup/common.sh@31/@32 compared fields MemTotal through HugePages_Free against HugePages_Rsvd, each taking the continue branch ...]
00:05:46.445 23:19:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.445 23:19:07 -- setup/common.sh@33 -- # echo 0
00:05:46.445 23:19:07 -- setup/common.sh@33 -- # return 0
00:05:46.445 23:19:07 -- setup/hugepages.sh@100 -- # resv=0
00:05:46.445 23:19:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:46.445 nr_hugepages=1024
00:05:46.445 23:19:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:46.445 resv_hugepages=0
00:05:46.445 23:19:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:46.445 surplus_hugepages=0
00:05:46.445 23:19:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:46.445 anon_hugepages=0
00:05:46.445 23:19:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.445 23:19:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
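With anon, surp and resv all read back as 0, the two checks at hugepages.sh@107 and @109 just traced reduce to plain accounting: every one of the 1024 HugePages_Total pages must be a page this test requested, none surplus or reserved. Restated as a standalone snippet (values taken from the echoes above):

    anon=0 surp=0 resv=0               # AnonHugePages, HugePages_Surp, HugePages_Rsvd
    nr_hugepages=1024                  # 512 requested on each of nodes 0 and 1
    (( 1024 == nr_hugepages + surp + resv ))  # total pages are all accounted for
    (( 1024 == nr_hugepages ))                # and match the requested count
    echo "hugepage accounting consistent"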
HugePages_Total
00:05:46.445 23:19:07 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:46.445 23:19:07 -- setup/common.sh@18 -- # local node=
00:05:46.445 23:19:07 -- setup/common.sh@19 -- # local var val
00:05:46.445 23:19:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.445 23:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.445 23:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.445 23:19:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.445 23:19:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.445 23:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.445 23:19:07 -- setup/common.sh@31 -- # IFS=': '
00:05:46.445 23:19:07 -- setup/common.sh@31 -- # read -r var val _
00:05:46.445 23:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25145768 kB' 'MemAvailable: 30168580 kB' 'Buffers: 2704 kB' 'Cached: 13960888 kB' 'SwapCached: 0 kB' 'Active: 9846612 kB' 'Inactive: 4662844 kB' 'Active(anon): 9445596 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549024 kB' 'Mapped: 203064 kB' 'Shmem: 8899732 kB' 'KReclaimable: 505292 kB' 'Slab: 900768 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395476 kB' 'KernelStack: 12320 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10578496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196712 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[xtrace elided: the IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle repeats verbatim for every key in the snapshot above until HugePages_Total matches]
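The trace above is the body of the setup/common.sh get_meminfo helper scanning /proc/meminfo one key at a time. A minimal, self-contained sketch of the same pattern; the function name get_mem_value and its exact layout are illustrative, not the SPDK code itself:

    get_mem_value() {
        local get=$1 line var val _
        local mem_f=/proc/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"               # one array element per meminfo line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # skip every non-matching key
            echo "$val"                         # e.g. 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Called as get_mem_value HugePages_Total, this would print 1024 on the machine traced above.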
00:05:46.447 23:19:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.447 23:19:07 -- setup/common.sh@33 -- # echo 1024
00:05:46.447 23:19:07 -- setup/common.sh@33 -- # return 0
00:05:46.447 23:19:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.447 23:19:07 -- setup/hugepages.sh@112 -- # get_nodes
00:05:46.447 23:19:07 -- setup/hugepages.sh@27 -- # local node
00:05:46.447 23:19:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:46.447 23:19:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:46.447 23:19:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:46.447 23:19:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:46.447 23:19:07 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:46.447 23:19:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:46.447 23:19:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:46.447 23:19:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:46.447 23:19:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:46.447 23:19:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.447 23:19:07 -- setup/common.sh@18 -- # local node=0
00:05:46.447 23:19:07 -- setup/common.sh@19 -- # local var val
00:05:46.447 23:19:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.447 23:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.447 23:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:46.447 23:19:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:46.447 23:19:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.447 23:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.447 23:19:07 -- setup/common.sh@31 -- # IFS=': '
00:05:46.447 23:19:07 -- setup/common.sh@31 -- # read -r var val _
00:05:46.447 23:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 11869596 kB' 'MemUsed: 12749816 kB' 'SwapCached: 0 kB' 'Active: 7268112 kB' 'Inactive: 3443236 kB' 'Active(anon): 7105192 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565768 kB' 'Mapped: 125608 kB' 'AnonPages: 148704 kB' 'Shmem: 6959612 kB' 'KernelStack: 7640 kB' 'PageTables: 4768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582132 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the same per-key match-and-continue cycle repeats over the node0 snapshot until HugePages_Surp matches]
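When a node argument is supplied (node=0 here), the trace swaps mem_f for the node-local meminfo file and strips the "Node N " prefix those files carry. A sketch under the same assumptions; node_mem_value is an illustrative name, and extglob must be enabled for the +([0-9]) pattern:

    shopt -s extglob                             # the +([0-9]) pattern below needs extglob

    node_mem_value() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # per-node figures live next to the node object in sysfs
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")         # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

On the node0 snapshot above, node_mem_value HugePages_Surp 0 would print 0.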
00:05:46.448 23:19:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.448 23:19:07 -- setup/common.sh@33 -- # echo 0
00:05:46.448 23:19:07 -- setup/common.sh@33 -- # return 0
00:05:46.448 23:19:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:46.448 23:19:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:46.448 23:19:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:46.448 23:19:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:46.448 23:19:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.448 23:19:07 -- setup/common.sh@18 -- # local node=1
00:05:46.448 23:19:07 -- setup/common.sh@19 -- # local var val
00:05:46.448 23:19:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.448 23:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.448 23:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:46.448 23:19:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:46.448 23:19:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.448 23:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.448 23:19:07 -- setup/common.sh@31 -- # IFS=': '
00:05:46.448 23:19:07 -- setup/common.sh@31 -- # read -r var val _
00:05:46.448 23:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407260 kB' 'MemFree: 13269280 kB' 'MemUsed: 6137980 kB' 'SwapCached: 0 kB' 'Active: 2582224 kB' 'Inactive: 1219608 kB' 'Active(anon): 2344128 kB' 'Inactive(anon): 0 kB' 'Active(file): 238096 kB' 'Inactive(file): 1219608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3397840 kB' 'Mapped: 77936 kB' 'AnonPages: 404056 kB' 'Shmem: 1940136 kB' 'KernelStack: 4728 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128788 kB' 'Slab: 318636 kB' 'SReclaimable: 128788 kB' 'SUnreclaim: 189848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the same per-key match-and-continue cycle repeats over the node1 snapshot until HugePages_Surp matches]
00:05:46.449 23:19:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.449 23:19:07 -- setup/common.sh@33 -- # echo 0
00:05:46.449 23:19:07 -- setup/common.sh@33 -- # return 0
00:05:46.449 23:19:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:46.449 23:19:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:46.449 23:19:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:46.449 23:19:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:46.449 node0=512 expecting 512
00:05:46.449 23:19:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:46.449 23:19:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:46.449 23:19:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:46.449 23:19:07 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:46.449 node1=512 expecting 512
00:05:46.449 23:19:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:46.449 
00:05:46.449 real	0m1.911s
00:05:46.449 user	0m0.794s
00:05:46.449 sys	0m1.091s
00:05:46.449 23:19:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:46.449 23:19:07 -- common/autotest_common.sh@10 -- # set +x
00:05:46.449 ************************************
00:05:46.449 END TEST per_node_1G_alloc
00:05:46.449 ************************************
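The pass/fail logic that just ran reduces to one arithmetic check plus a per-node comparison. A sketch using the figures from this run; variable names are illustrative, and surp/resv come from the HugePages_Surp and HugePages_Rsvd keys of the global snapshot:

    nr_hugepages=1024                        # requested page count
    total=1024                               # HugePages_Total, global meminfo
    surp=0                                   # HugePages_Surp, global meminfo
    resv=0                                   # HugePages_Rsvd, global meminfo
    declare -A node_total=([0]=512 [1]=512)  # per-node HugePages_Total readings

    # the test's core assertion: every requested page is accounted for
    (( total == nr_hugepages + surp + resv )) || { echo "unexpected total: $total"; exit 1; }
    for node in "${!node_total[@]}"; do
        echo "node$node=${node_total[$node]} expecting $(( nr_hugepages / 2 ))"
    done

Run on this box it would print the same "node0=512 expecting 512" / "node1=512 expecting 512" lines the log shows.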
00:05:46.449 23:19:07 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:46.449 23:19:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:46.449 23:19:07 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:46.449 23:19:07 -- common/autotest_common.sh@10 -- # set +x
00:05:46.449 ************************************
00:05:46.449 START TEST even_2G_alloc
00:05:46.449 ************************************
00:05:46.449 23:19:07 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:05:46.449 23:19:07 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:46.449 23:19:07 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:46.449 23:19:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:46.449 23:19:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:46.449 23:19:07 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:46.449 23:19:07 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:46.449 23:19:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:46.449 23:19:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:46.449 23:19:07 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:46.449 23:19:07 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:46.449 23:19:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:46.449 23:19:07 -- setup/hugepages.sh@83 -- # : 512
00:05:46.449 23:19:07 -- setup/hugepages.sh@84 -- # : 1
00:05:46.449 23:19:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:46.449 23:19:07 -- setup/hugepages.sh@83 -- # : 0
00:05:46.449 23:19:07 -- setup/hugepages.sh@84 -- # : 0
00:05:46.449 23:19:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:46.449 23:19:07 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:46.449 23:19:07 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:46.449 23:19:07 -- setup/hugepages.sh@153 -- # setup output
00:05:46.449 23:19:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:46.449 23:19:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:48.355 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:48.355 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:48.355 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:48.355 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:48.355 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:48.355 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:48.355 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:48.355 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:48.355 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:48.355 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:48.355 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:48.355 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:48.355 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:48.355 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:48.355 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:48.355 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:48.355 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:48.355 23:19:09 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:48.355 23:19:09 -- setup/hugepages.sh@89 -- # local node
00:05:48.355 23:19:09 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:48.355 23:19:09 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:48.355 23:19:09 -- setup/hugepages.sh@92 -- # local surp
00:05:48.355 23:19:09 -- setup/hugepages.sh@93 -- # local resv
00:05:48.355 23:19:09 -- setup/hugepages.sh@94 -- # local anon
00:05:48.355 23:19:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:48.355 23:19:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:48.355 23:19:09 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:48.355 23:19:09 -- setup/common.sh@18 -- # local node=
00:05:48.355 23:19:09 -- setup/common.sh@19 -- # local var val
00:05:48.355 23:19:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.355 23:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.355 23:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.355 23:19:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.355 23:19:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.355 23:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.355 23:19:09 -- setup/common.sh@31 -- # IFS=': '
00:05:48.355 23:19:09 -- setup/common.sh@31 -- # read -r var val _
00:05:48.355 23:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25145948 kB' 'MemAvailable: 30168760 kB' 'Buffers: 2704 kB' 'Cached: 13960956 kB' 'SwapCached: 0 kB' 'Active: 9841824 kB' 'Inactive: 4662844 kB' 'Active(anon): 9440808 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544276 kB' 'Mapped: 201712 kB' 'Shmem: 8899800 kB' 'KReclaimable: 505292 kB' 'Slab: 900580 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395288 kB' 'KernelStack: 12352 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10562664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[xtrace elided: the same per-key match-and-continue cycle repeats over the snapshot until AnonHugePages matches]
00:05:48.356 23:19:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:48.356 23:19:09 -- setup/common.sh@33 -- # echo 0
00:05:48.356 23:19:09 -- setup/common.sh@33 -- # return 0
00:05:48.356 23:19:09 -- setup/hugepages.sh@97 -- # anon=0
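The anon branch above pattern-matches what looks like the transparent-hugepage mode string ("always [madvise] never"), presumably read from /sys/kernel/mm/transparent_hugepage/enabled, and only counts AnonHugePages when the bracketed (active) mode is not [never]. A hedged sketch of that check, standing in for the verify_nr_hugepages logic:

    # brackets mark the active THP mode, e.g. "always [madvise] never"
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP may be handing out anonymous huge pages; pick up the current figure (kB)
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=${anon} kB"   # 0 on this run, so anon pages don't perturb the totals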
00:05:48.356 23:19:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:48.356 23:19:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:48.356 23:19:09 -- setup/common.sh@18 -- # local node=
00:05:48.356 23:19:09 -- setup/common.sh@19 -- # local var val
00:05:48.356 23:19:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.356 23:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.356 23:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.356 23:19:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.356 23:19:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.356 23:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.356 23:19:09 -- setup/common.sh@31 -- # IFS=': '
00:05:48.356 23:19:09 -- setup/common.sh@31 -- # read -r var val _
00:05:48.356 23:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25146820 kB' 'MemAvailable: 30169632 kB' 'Buffers: 2704 kB' 'Cached: 13960956 kB' 'SwapCached: 0 kB' 'Active: 9841528 kB' 'Inactive: 4662844 kB' 'Active(anon): 9440512 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543972 kB' 'Mapped: 201696 kB' 'Shmem: 8899800 kB' 'KReclaimable: 505292 kB' 'Slab: 900580 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395288 kB' 'KernelStack: 12336 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10562676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[xtrace elided: the per-key match-and-continue cycle is still running toward HugePages_Surp when this excerpt ends]
23:19:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 
23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # continue 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.358 23:19:09 -- setup/common.sh@33 -- # echo 0 00:05:48.358 23:19:09 -- setup/common.sh@33 -- # return 0 00:05:48.358 23:19:09 -- setup/hugepages.sh@99 -- # surp=0 00:05:48.358 23:19:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:48.358 23:19:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:48.358 23:19:09 -- setup/common.sh@18 -- # local node= 00:05:48.358 23:19:09 -- setup/common.sh@19 -- # local var val 00:05:48.358 23:19:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.358 23:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.358 23:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.358 23:19:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.358 23:19:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.358 23:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.358 23:19:09 -- setup/common.sh@16 -- # printf '%s\n' 
00:05:48.358 23:19:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:48.358 23:19:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:48.358 23:19:09 -- setup/common.sh@18 -- # local node=
00:05:48.358 23:19:09 -- setup/common.sh@19 -- # local var val
00:05:48.358 23:19:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.358 23:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.358 23:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.358 23:19:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.358 23:19:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.358 23:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.358 23:19:09 -- setup/common.sh@31 -- # IFS=': '
00:05:48.358 23:19:09 -- setup/common.sh@31 -- # read -r var val _
00:05:48.358 23:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25146876 kB' 'MemAvailable: 30169688 kB' 'Buffers: 2704 kB' 'Cached: 13960968 kB' 'SwapCached: 0 kB' 'Active: 9841404 kB' 'Inactive: 4662844 kB' 'Active(anon): 9440388 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543808 kB' 'Mapped: 201592 kB' 'Shmem: 8899812 kB' 'KReclaimable: 505292 kB' 'Slab: 900564 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395272 kB' 'KernelStack: 12336 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10562688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[... per-key scan elided: every key from MemTotal through HugePages_Free fails [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and hits continue ...]
00:05:48.360 23:19:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:48.360 23:19:09 -- setup/common.sh@33 -- # echo 0
00:05:48.360 23:19:09 -- setup/common.sh@33 -- # return 0
00:05:48.360 23:19:09 -- setup/hugepages.sh@100 -- # resv=0
00:05:48.360 23:19:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:48.360 nr_hugepages=1024
00:05:48.360 23:19:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:48.360 resv_hugepages=0
00:05:48.360 23:19:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:48.360 surplus_hugepages=0
00:05:48.360 23:19:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:48.360 anon_hugepages=0
00:05:48.360 23:19:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:48.360 23:19:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
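
The two arithmetic guards just traced encode the test's hugepage-accounting invariant: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages (all zero in this run). A standalone restatement of that check, using awk on /proc/meminfo instead of the helper sketched above; nr_hugepages=1024 is the value requested per this log:

    #!/usr/bin/env bash
    # Restates the invariant checked at hugepages.sh@107-109 in this trace.
    nr_hugepages=1024   # requested by the test, per the log
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "mismatch: total=$total != $nr_hugepages + $surp + $resv" >&2
    fi
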
00:05:48.360 23:19:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:48.360 23:19:09 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:48.360 23:19:09 -- setup/common.sh@18 -- # local node=
00:05:48.360 23:19:09 -- setup/common.sh@19 -- # local var val
00:05:48.360 23:19:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.360 23:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.360 23:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.360 23:19:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.360 23:19:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.360 23:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.360 23:19:09 -- setup/common.sh@31 -- # IFS=': '
00:05:48.360 23:19:09 -- setup/common.sh@31 -- # read -r var val _
00:05:48.360 23:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25147312 kB' 'MemAvailable: 30170124 kB' 'Buffers: 2704 kB' 'Cached: 13960984 kB' 'SwapCached: 0 kB' 'Active: 9841392 kB' 'Inactive: 4662844 kB' 'Active(anon): 9440376 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543808 kB' 'Mapped: 201592 kB' 'Shmem: 8899828 kB' 'KReclaimable: 505292 kB' 'Slab: 900564 kB' 'SReclaimable: 505292 kB' 'SUnreclaim: 395272 kB' 'KernelStack: 12336 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10562704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[... per-key scan elided: every key from MemTotal through Unaccepted fails [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hits continue ...]
00:05:48.622 23:19:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:48.622 23:19:09 -- setup/common.sh@33 -- # echo 1024
00:05:48.622 23:19:09 -- setup/common.sh@33 -- # return 0
00:05:48.622 23:19:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:48.622 23:19:09 -- setup/hugepages.sh@112 -- # get_nodes
00:05:48.622 23:19:09 -- setup/hugepages.sh@27 -- # local node
00:05:48.622 23:19:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:48.622 23:19:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:48.622 23:19:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:48.622 23:19:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:48.622 23:19:09 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:48.622 23:19:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:48.622 23:19:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:48.622 23:19:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:48.622 23:19:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:48.622 23:19:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:48.622 23:19:09 -- setup/common.sh@18 -- # local node=0
00:05:48.622 23:19:09 -- setup/common.sh@19 -- # local var val
00:05:48.622 23:19:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.622 23:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.622 23:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:48.622 23:19:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:48.622 23:19:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.622 23:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.622 23:19:09 -- setup/common.sh@31 -- # IFS=': '
00:05:48.622 23:19:09 -- setup/common.sh@31 -- # read -r var val _
00:05:48.622 23:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 11867256 kB' 'MemUsed: 12752156 kB' 'SwapCached: 0 kB' 'Active: 7266776 kB' 'Inactive: 3443236 kB' 'Active(anon): 7103856 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565848 kB' 'Mapped: 124824 kB' 'AnonPages: 147276 kB' 'Shmem: 6959692 kB' 'KernelStack: 7592 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582024 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-key scan elided: every node-0 key from MemTotal through HugePages_Free fails [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits continue ...]
00:05:48.623 23:19:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:48.623 23:19:09 -- setup/common.sh@33 -- # echo 0
00:05:48.623 23:19:09 -- setup/common.sh@33 -- # return 0
00:05:48.623 23:19:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
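
get_nodes and the per-node loop traced above split the 1024-page pool evenly (512 per node on this two-node box) and then verify each node's counts from its own meminfo file. A sketch of that per-node walk under the same sysfs layout; it uses an associative array where the original appears to index nodes_sys by node number:

    #!/usr/bin/env bash
    # Walk the NUMA nodes the way hugepages.sh@29-33 does in this trace and
    # report each node's hugepage total; reconstructed sketch, not SPDK code.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # Node meminfo lines look like "Node 0 HugePages_Total:   512".
        nodes_sys[$n]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: HugePages_Total=${nodes_sys[$n]}"   # 512 and 512 in this log
    done
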
23:19:09 -- setup/common.sh@31 -- # IFS=': '
00:05:48.623 23:19:09 -- setup/common.sh@31 -- # read -r var val _
00:05:48.623 23:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407260 kB' 'MemFree: 13282116 kB' 'MemUsed: 6125144 kB' 'SwapCached: 0 kB' 'Active: 2574596 kB' 'Inactive: 1219608 kB' 'Active(anon): 2336500 kB' 'Inactive(anon): 0 kB' 'Active(file): 238096 kB' 'Inactive(file): 1219608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3397856 kB' 'Mapped: 76768 kB' 'AnonPages: 396416 kB' 'Shmem: 1940152 kB' 'KernelStack: 4728 kB' 'PageTables: 3520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128788 kB' 'Slab: 318540 kB' 'SReclaimable: 128788 kB' 'SUnreclaim: 189752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-@32 repeat IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for every node0 meminfo field above until HugePages_Surp matches]
00:05:48.624 23:19:09 -- setup/common.sh@33 -- # echo 0
00:05:48.624 23:19:09 -- setup/common.sh@33 -- # return 0
00:05:48.624 23:19:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:48.624 23:19:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:48.624 23:19:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:48.624 23:19:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:48.624 23:19:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:48.624 node0=512 expecting 512
00:05:48.624 23:19:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:48.624 23:19:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:48.624 23:19:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:48.624 23:19:09 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:48.624 node1=512 expecting 512
00:05:48.624 23:19:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:48.624 
00:05:48.624 real 0m2.083s
00:05:48.624 user 0m0.888s
00:05:48.624 sys 0m1.171s
00:05:48.624 23:19:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:48.624 23:19:09 -- common/autotest_common.sh@10 -- # set +x
00:05:48.624 ************************************
00:05:48.624 END TEST even_2G_alloc
00:05:48.624 ************************************
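The get_meminfo calls traced above fetch a single field by replaying /proc/meminfo (or a per-node copy of it) through a `while IFS=': ' read` loop until the requested key matches. A minimal standalone sketch of that pattern, assuming plain bash and the standard /proc/meminfo layout (the function name is illustrative, not the exact setup/common.sh source):

    #!/usr/bin/env bash
    # Sketch: look up one /proc/meminfo field the way the traced loop does:
    # split each line on ': ' and stop as soon as the wanted key matches.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                printf '%s\n' "$val"   # value only; the "kB" unit lands in $_
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this box, per the trace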
00:05:48.624 23:19:09 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:48.624 23:19:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:48.624 23:19:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:48.624 23:19:09 -- common/autotest_common.sh@10 -- # set +x
00:05:48.624 ************************************
00:05:48.624 START TEST odd_alloc
00:05:48.624 ************************************
00:05:48.624 23:19:09 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:48.624 23:19:09 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:48.624 23:19:09 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:48.624 23:19:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:48.624 23:19:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:48.624 23:19:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:48.624 23:19:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:48.624 23:19:09 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:48.624 23:19:09 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:48.624 23:19:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:48.624 23:19:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:48.624 23:19:09 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:48.624 23:19:09 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:48.624 23:19:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:48.624 23:19:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:48.624 23:19:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:48.625 23:19:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:48.625 23:19:09 -- setup/hugepages.sh@83 -- # : 513
00:05:48.625 23:19:09 -- setup/hugepages.sh@84 -- # : 1
00:05:48.625 23:19:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:48.625 23:19:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:48.625 23:19:09 -- setup/hugepages.sh@83 -- # : 0
00:05:48.625 23:19:09 -- setup/hugepages.sh@84 -- # : 0
00:05:48.625 23:19:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
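The @81-@84 loop above is what turns the odd request into a per-node plan: 2098176 kB is 1024.5 default-size (2048 kB) pages, so the helper rounds up to 1025, and the loop then hands out each node's integer share of what remains, working from the highest node index down, which leaves the odd page on node0 (node0=513, node1=512). A sketch of that distribution, assuming two NUMA nodes (variable names mirror the trace, but this is a reconstruction, not the hugepages.sh source):

    # Distribute nr_hugepages across no_nodes the way the trace shows:
    # each step gives node (no_nodes-1) its integer share of what is left.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test
    while (( no_nodes > 0 )); do
        nodes_test[no_nodes - 1]=$(( nr_hugepages / no_nodes ))
        nr_hugepages=$(( nr_hugepages - nodes_test[no_nodes - 1] ))
        (( no_nodes-- ))
    done
    declare -p nodes_test   # declare -a nodes_test=([0]="513" [1]="512")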
00:05:48.625 23:19:09 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:48.625 23:19:09 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:48.625 23:19:09 -- setup/hugepages.sh@160 -- # setup output
00:05:48.625 23:19:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:48.625 23:19:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:50.002 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:50.002 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:50.002 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:50.002 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:50.002 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:50.002 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:50.002 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:50.002 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:50.002 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:50.002 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:50.002 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:50.002 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:50.002 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:50.002 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:50.002 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:50.002 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:50.002 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:50.265 23:19:11 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:50.265 23:19:11 -- setup/hugepages.sh@89 -- # local node
00:05:50.265 23:19:11 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:50.265 23:19:11 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:50.265 23:19:11 -- setup/hugepages.sh@92 -- # local surp
00:05:50.265 23:19:11 -- setup/hugepages.sh@93 -- # local resv
00:05:50.265 23:19:11 -- setup/hugepages.sh@94 -- # local anon
00:05:50.266 23:19:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
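The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test above is the THP gate: verify_nr_hugepages only expects a nonzero AnonHugePages figure when transparent hugepages are not disabled, and the active mode is the bracketed word in the sysfs file. A sketch of the same check in isolation (the path is the standard kernel one; the echo strings are illustrative):

    # The active THP mode is the word in brackets, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        echo "THP active mode allows anon hugepages: $thp"
    else
        echo "THP disabled; AnonHugePages should stay 0"
    fi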
00:05:50.266 23:19:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:50.266 23:19:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:50.266 23:19:11 -- setup/common.sh@18 -- # local node=
00:05:50.266 23:19:11 -- setup/common.sh@19 -- # local var val
00:05:50.266 23:19:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:50.266 23:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:50.266 23:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:50.266 23:19:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:50.266 23:19:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:50.266 23:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:50.266 23:19:11 -- setup/common.sh@31 -- # IFS=': '
00:05:50.266 23:19:11 -- setup/common.sh@31 -- # read -r var val _
00:05:50.266 23:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25126364 kB' 'MemAvailable: 30149216 kB' 'Buffers: 2704 kB' 'Cached: 13961056 kB' 'SwapCached: 0 kB' 'Active: 9842572 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441556 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545084 kB' 'Mapped: 201596 kB' 'Shmem: 8899900 kB' 'KReclaimable: 505332 kB' 'Slab: 900704 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395372 kB' 'KernelStack: 12320 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 10562896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[xtrace elided: setup/common.sh@31-@32 scan each field above against AnonHugePages and continue until it matches]
00:05:50.267 23:19:11 -- setup/common.sh@33 -- # echo 0
00:05:50.267 23:19:11 -- setup/common.sh@33 -- # return 0
00:05:50.267 23:19:11 -- setup/hugepages.sh@97 -- # anon=0
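Inside get_meminfo the trace shows `mem=("${mem[@]#Node +([0-9]) }")`: lines in a per-node meminfo (/sys/devices/system/node/nodeN/meminfo) carry a "Node N " prefix that /proc/meminfo lacks, and stripping it with an extglob pattern lets one parser serve both files. Here no node was requested (node= is empty, so mem_f stays /proc/meminfo and the strip is a no-op). A sketch of the per-node case, assuming node0 exists and extglob is enabled:

    shopt -s extglob                                   # needed for the +([0-9]) pattern
    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo   # lines look like "Node 0 MemTotal: ..."
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                   # now "MemTotal: ..." as in /proc/meminfo
    printf '%s\n' "${mem[@]:0:3}"                      # first few normalized lines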
00:05:50.267 23:19:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:50.267 23:19:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:50.267 23:19:11 -- setup/common.sh@18 -- # local node=
00:05:50.267 23:19:11 -- setup/common.sh@19 -- # local var val
00:05:50.267 23:19:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:50.267 23:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:50.267 23:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:50.267 23:19:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:50.267 23:19:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:50.267 23:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:50.267 23:19:11 -- setup/common.sh@31 -- # IFS=': '
00:05:50.267 23:19:11 -- setup/common.sh@31 -- # read -r var val _
00:05:50.267 23:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25127628 kB' 'MemAvailable: 30150480 kB' 'Buffers: 2704 kB' 'Cached: 13961060 kB' 'SwapCached: 0 kB' 'Active: 9842376 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441360 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544760 kB' 'Mapped: 201672 kB' 'Shmem: 8899904 kB' 'KReclaimable: 505332 kB' 'Slab: 900712 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395380 kB' 'KernelStack: 12288 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 10562908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[xtrace elided: setup/common.sh@31-@32 scan each field above against HugePages_Surp and continue until it matches]
00:05:50.268 23:19:11 -- setup/common.sh@33 -- # echo 0
00:05:50.268 23:19:11 -- setup/common.sh@33 -- # return 0
00:05:50.268 23:19:11 -- setup/hugepages.sh@99 -- # surp=0
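HugePages_Surp comes back 0 here, which is the expected steady state: surplus pages only exist when the kernel is allowed to overcommit hugepages beyond nr_hugepages, and the 0 above suggests these test boxes leave that knob at its default. A quick way to confirm the relationship (paths are the standard procfs ones; this is an illustrative check, not part of the test):

    cat /proc/sys/vm/nr_overcommit_hugepages   # 0 -> no surplus pages can be created
    grep ^HugePages_Surp /proc/meminfo         # HugePages_Surp: 0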
00:05:50.268 23:19:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:50.268 23:19:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:50.268 23:19:11 -- setup/common.sh@18 -- # local node=
00:05:50.268 23:19:11 -- setup/common.sh@19 -- # local var val
00:05:50.268 23:19:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:50.268 23:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:50.268 23:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:50.268 23:19:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:50.268 23:19:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:50.268 23:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:50.268 23:19:11 -- setup/common.sh@31 -- # IFS=': '
00:05:50.268 23:19:11 -- setup/common.sh@31 -- # read -r var val _
00:05:50.268 23:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25131112 kB' 'MemAvailable: 30153964 kB' 'Buffers: 2704 kB' 'Cached: 13961072 kB' 'SwapCached: 0 kB' 'Active: 9842084 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441068 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544448 kB' 'Mapped: 201600 kB' 'Shmem: 8899916 kB' 'KReclaimable: 505332 kB' 'Slab: 900680 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395348 kB' 'KernelStack: 12352 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 10562924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[xtrace elided: setup/common.sh@31-@32 scan each field above against HugePages_Rsvd and continue until it matches]
00:05:50.270 23:19:11 -- setup/common.sh@33 -- # echo 0
00:05:50.270 23:19:11 -- setup/common.sh@33 -- # return 0
00:05:50.270 23:19:11 -- setup/hugepages.sh@100 -- # resv=0
00:05:50.270 23:19:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:50.270 nr_hugepages=1025
00:05:50.270 23:19:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:50.270 resv_hugepages=0
00:05:50.270 23:19:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:50.270 surplus_hugepages=0
00:05:50.270 23:19:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:50.270 anon_hugepages=0
00:05:50.270 23:19:11 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:50.270 23:19:11 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
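With anon, surp, and resv measured, the @107/@109 checks above close the loop: the kernel's HugePages_Total must equal the configured nr_hugepages plus surplus plus reserved pages, and with both adjustments at zero it must equal nr_hugepages exactly. The same assertion as a standalone sketch (the awk extraction is illustrative; the field names are the standard /proc/meminfo ones):

    nr_hugepages=1025   # what odd_alloc configured
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "accounting mismatch" >&2
    (( total == nr_hugepages )) || echo "unexpected surplus/reserved pages" >&2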
00:05:50.270 23:19:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:50.270 resv_hugepages=0
00:05:50.270 23:19:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:50.270 surplus_hugepages=0
00:05:50.270 23:19:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:50.270 anon_hugepages=0
00:05:50.270 23:19:11 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:50.270 23:19:11 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:50.270 23:19:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:50.270 23:19:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:50.270 23:19:11 -- setup/common.sh@18 -- # local node=
00:05:50.270 23:19:11 -- setup/common.sh@19 -- # local var val
00:05:50.270 23:19:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:50.270 23:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:50.270 23:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:50.270 23:19:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:50.270 23:19:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:50.270 23:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:50.270 23:19:11 -- setup/common.sh@31 -- # IFS=': '
00:05:50.270 23:19:11 -- setup/common.sh@31 -- # read -r var val _
00:05:50.270 23:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25131176 kB' 'MemAvailable: 30154028 kB' 'Buffers: 2704 kB' 'Cached: 13961084 kB' 'SwapCached: 0 kB' 'Active: 9842136 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441120 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544448 kB' 'Mapped: 201600 kB' 'Shmem: 8899928 kB' 'KReclaimable: 505332 kB' 'Slab: 900680 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395348 kB' 'KernelStack: 12352 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 10562936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:50.270 [xtrace condensed: the read loop skips every key from MemTotal through Unaccepted until HugePages_Total matches]
00:05:50.534 23:19:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:50.534 23:19:11 -- setup/common.sh@33 -- # echo 1025
00:05:50.534 23:19:11 -- setup/common.sh@33 -- # return 0
00:05:50.534 23:19:11 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
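
The arithmetic guards traced at setup/hugepages.sh@107-@110 assert that the kernel's hugepage accounting matches the requested allocation: the reported total must equal the configured page count plus surplus plus reserved pages. A self-contained re-derivation of that identity, using a plain awk-over-/proc/meminfo lookup rather than the script's own helper:

    #!/usr/bin/env bash
    # Sketch: re-check the identity total == nr_hugepages + surp + resv
    # against the values /proc/meminfo reports.
    nr_hugepages=1025   # what the odd_alloc test configured
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv"
    fi
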
00:05:50.534 23:19:11 -- setup/hugepages.sh@112 -- # get_nodes
00:05:50.534 23:19:11 -- setup/hugepages.sh@27 -- # local node
00:05:50.534 23:19:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:50.534 23:19:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:50.534 23:19:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:50.534 23:19:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:50.534 23:19:11 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:50.534 23:19:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:50.534 23:19:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:50.534 23:19:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:50.534 23:19:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:50.534 23:19:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:50.534 23:19:11 -- setup/common.sh@18 -- # local node=0
00:05:50.534 23:19:11 -- setup/common.sh@19 -- # local var val
00:05:50.534 23:19:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:50.534 23:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:50.534 23:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:50.534 23:19:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:50.534 23:19:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:50.534 23:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:50.534 23:19:11 -- setup/common.sh@31 -- # IFS=': '
00:05:50.534 23:19:11 -- setup/common.sh@31 -- # read -r var val _
00:05:50.534 23:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 11867404 kB' 'MemUsed: 12752008 kB' 'SwapCached: 0 kB' 'Active: 7266652 kB' 'Inactive: 3443236 kB' 'Active(anon): 7103732 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565900 kB' 'Mapped: 124832 kB' 'AnonPages: 147152 kB' 'Shmem: 6959744 kB' 'KernelStack: 7592 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582104 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:50.534 [xtrace condensed: the read loop skips node0's keys, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:05:50.535 23:19:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:50.535 23:19:11 -- setup/common.sh@33 -- # echo 0
00:05:50.535 23:19:11 -- setup/common.sh@33 -- # return 0
00:05:50.535 23:19:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:50.535 23:19:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:50.535 23:19:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:50.535 23:19:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:50.535 23:19:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:50.535 23:19:11 -- setup/common.sh@18 -- # local node=1
00:05:50.535 23:19:11 -- setup/common.sh@19 -- # local var val
00:05:50.535 23:19:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:50.535 23:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:50.535 23:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:50.535 23:19:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:50.535 23:19:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:50.535 23:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:50.535 23:19:11 -- setup/common.sh@31 -- # IFS=': '
00:05:50.535 23:19:11 -- setup/common.sh@31 -- # read -r var val _
00:05:50.536 23:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407260 kB' 'MemFree: 13263556 kB' 'MemUsed: 6143704 kB' 'SwapCached: 0 kB' 'Active: 2575480 kB' 'Inactive: 1219608 kB' 'Active(anon): 2337384 kB' 'Inactive(anon): 0 kB' 'Active(file): 238096 kB' 'Inactive(file): 1219608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3397916 kB' 'Mapped: 76768 kB' 'AnonPages: 397284 kB' 'Shmem: 1940212 kB' 'KernelStack: 4760 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128828 kB' 'Slab: 318568 kB' 'SReclaimable: 128828 kB' 'SUnreclaim: 189740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:05:50.536 [xtrace condensed: the read loop skips node1's keys until HugePages_Surp matches]
00:05:50.536 23:19:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:50.536 23:19:11 -- setup/common.sh@33 -- # echo 0
00:05:50.536 23:19:11 -- setup/common.sh@33 -- # return 0
00:05:50.536 23:19:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:50.536 23:19:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:50.536 23:19:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:50.536 23:19:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:50.536 23:19:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:05:50.536 node0=512 expecting 513
00:05:50.536 23:19:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:50.536 23:19:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:50.536 23:19:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:50.537 23:19:11 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:50.537 node1=513 expecting 512
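
When get_meminfo is handed a node number, the trace above shows it switching mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo and stripping the "Node N " prefix before matching keys. A small sketch of that per-node lookup; the sysfs paths are the real ones from the trace, the helper name is invented for illustration:

    #!/usr/bin/env bash
    # Sketch: per-node meminfo lines look like "Node 0 HugePages_Total:  512",
    # so match on the third field and print the fourth.
    node_meminfo() {
        local node=$1 get=$2
        awk -v key="$get:" '$3 == key {print $4}' \
            "/sys/devices/system/node/node$node/meminfo"
    }

    node_meminfo 0 HugePages_Total   # 512 on this host
    node_meminfo 1 HugePages_Total   # 513 on this host

odd_alloc deliberately configures an odd total (1025) so the two-node split cannot be even; the "node0=512 expecting 513" / "node1=513 expecting 512" echoes record that the per-node counts and the expected pair {512, 513} agree once sorted, which is what the [[ 512 513 == ... ]] check that follows confirms.
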
23:19:11 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:50.537 00:05:50.537 real 0m1.859s 00:05:50.537 user 0m0.771s 00:05:50.537 sys 0m1.064s 00:05:50.537 23:19:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.537 23:19:11 -- common/autotest_common.sh@10 -- # set +x 00:05:50.537 ************************************ 00:05:50.537 END TEST odd_alloc 00:05:50.537 ************************************ 00:05:50.537 23:19:11 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:50.537 23:19:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.537 23:19:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.537 23:19:11 -- common/autotest_common.sh@10 -- # set +x 00:05:50.537 ************************************ 00:05:50.537 START TEST custom_alloc 00:05:50.537 ************************************ 00:05:50.537 23:19:11 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:50.537 23:19:11 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:50.537 23:19:11 -- setup/hugepages.sh@169 -- # local node 00:05:50.537 23:19:11 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:50.537 23:19:11 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:50.537 23:19:11 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:50.537 23:19:11 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:50.537 23:19:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:50.537 23:19:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:50.537 23:19:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:50.537 23:19:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:50.537 23:19:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:50.537 23:19:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:50.537 23:19:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:50.537 23:19:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:50.537 23:19:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:50.537 23:19:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:50.537 23:19:11 -- setup/hugepages.sh@83 -- # : 256 00:05:50.537 23:19:11 -- setup/hugepages.sh@84 -- # : 1 00:05:50.537 23:19:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:50.537 23:19:11 -- setup/hugepages.sh@83 -- # : 0 00:05:50.537 23:19:11 -- setup/hugepages.sh@84 -- # : 0 00:05:50.537 23:19:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:50.537 23:19:11 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:50.537 23:19:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:50.537 23:19:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:50.537 23:19:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:50.537 23:19:11 -- setup/hugepages.sh@62 -- 
# user_nodes=() 00:05:50.537 23:19:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:50.537 23:19:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:50.537 23:19:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:50.537 23:19:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:50.537 23:19:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:50.537 23:19:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:50.537 23:19:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:50.537 23:19:11 -- setup/hugepages.sh@78 -- # return 0 00:05:50.537 23:19:11 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:50.537 23:19:11 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:50.537 23:19:11 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:50.537 23:19:11 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:50.537 23:19:11 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:50.537 23:19:11 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:50.537 23:19:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:50.537 23:19:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:50.537 23:19:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:50.537 23:19:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:50.537 23:19:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:50.537 23:19:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:50.537 23:19:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:50.537 23:19:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:50.537 23:19:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:50.537 23:19:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:50.537 23:19:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:50.537 23:19:11 -- setup/hugepages.sh@78 -- # return 0 00:05:50.537 23:19:11 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:50.537 23:19:11 -- setup/hugepages.sh@187 -- # setup output 00:05:50.537 23:19:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.537 23:19:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.917 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:51.917 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:51.917 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:51.917 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:51.917 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:51.917 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:51.917 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:51.917 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:51.917 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:51.917 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:51.917 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:51.917 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:05:51.917 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:51.917 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:51.917 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:51.917 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:52.178 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:52.178 23:19:13 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:52.178 23:19:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:52.178 23:19:13 -- setup/hugepages.sh@89 -- # local node 00:05:52.178 23:19:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:52.178 23:19:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:52.178 23:19:13 -- setup/hugepages.sh@92 -- # local surp 00:05:52.178 23:19:13 -- setup/hugepages.sh@93 -- # local resv 00:05:52.178 23:19:13 -- setup/hugepages.sh@94 -- # local anon 00:05:52.178 23:19:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:52.178 23:19:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:52.178 23:19:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:52.178 23:19:13 -- setup/common.sh@18 -- # local node= 00:05:52.178 23:19:13 -- setup/common.sh@19 -- # local var val 00:05:52.178 23:19:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:52.178 23:19:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.178 23:19:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.178 23:19:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.178 23:19:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.178 23:19:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.178 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 23:19:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 24081448 kB' 'MemAvailable: 29104300 kB' 'Buffers: 2704 kB' 'Cached: 13961156 kB' 'SwapCached: 0 kB' 'Active: 9842532 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441516 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544700 kB' 'Mapped: 201604 kB' 'Shmem: 8900000 kB' 'KReclaimable: 505332 kB' 'Slab: 900688 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395356 kB' 'KernelStack: 12336 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10563128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB' 00:05:52.178 23:19:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.178 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 23:19:13 -- setup/common.sh@32 -- # [[ MemFree == 
00:05:52.178 23:19:13 [... repetitive xtrace elided: setup/common.sh@31-32 scans /proc/meminfo key by key (IFS=': '; read -r var val _); every key from MemAvailable through HardwareCorrupted fails the \A\n\o\n\H\u\g\e\P\a\g\e\s match and hits 'continue' ...]
00:05:52.179 23:19:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:52.179 23:19:13 -- setup/common.sh@33 -- # echo 0
00:05:52.179 23:19:13 -- setup/common.sh@33 -- # return 0
00:05:52.179 23:19:13 -- setup/hugepages.sh@97 -- # anon=0
00:05:52.179 23:19:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:52.179 23:19:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:52.179 23:19:13 -- setup/common.sh@18 -- # local node=
00:05:52.179 23:19:13 -- setup/common.sh@19 -- # local var val
00:05:52.179 23:19:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:52.179 23:19:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.179 23:19:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.179 23:19:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.179 23:19:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.179 23:19:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.179 23:19:13 -- setup/common.sh@31 -- # IFS=': '
00:05:52.179 23:19:13 -- setup/common.sh@31 -- # read -r var val _
00:05:52.180 23:19:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 24090420 kB' 'MemAvailable: 29113272 kB' 'Buffers: 2704 kB' 'Cached: 13961156 kB' 'SwapCached: 0 kB' 'Active: 9843024 kB' 'Inactive: 4662844 kB' 'Active(anon): 9442008 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545336 kB' 'Mapped: 201604 kB' 'Shmem: 8900000 kB' 'KReclaimable: 505332 kB' 'Slab: 900652 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395320 kB' 'KernelStack: 12352 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10563140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
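Each of these scans is the same get_meminfo walk from setup/common.sh: the snapshot is read line by line with IFS=': ', split into key and value, and every key is tested literally against the requested field until one matches, at which point the value is echoed and the function returns. A minimal standalone sketch of that pattern (a hypothetical helper for illustration, assuming plain /proc/meminfo input; the real script also handles per-node sysfs paths and a pre-populated mem array):

  # Sketch of the get_meminfo pattern traced above -- illustrative, not the harness's exact code.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # literal match against the requested key, e.g. AnonHugePages
          if [[ $var == "$get" ]]; then
              echo "$val"    # value in kB, or a bare count for HugePages_* fields
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  # usage: anon=$(get_meminfo_sketch AnonHugePages)   # -> 0 on this run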
[... per-key scan elided: every /proc/meminfo key before HugePages_Surp fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hits 'continue' ...]
00:05:52.181 23:19:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:52.181 23:19:13 -- setup/common.sh@33 -- # echo 0
00:05:52.181 23:19:13 -- setup/common.sh@33 -- # return 0
00:05:52.181 23:19:13 -- setup/hugepages.sh@99 -- # surp=0
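HugePages_Surp and HugePages_Rsvd both come back 0 here, which is what you expect right after the pool has been sized and before anything has reserved or faulted in pages. For anyone reproducing this run by hand, the same counters can be spot-checked directly (standard Linux paths, not part of the harness; the sysfs path assumes a hugetlb-enabled kernel with 2048 kB pages):

  # global view, same fields the harness reads:
  grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  # per-size pool via sysfs (2048 kB page size shown):
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages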
00:05:52.181 23:19:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:52.181 23:19:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:52.181 23:19:13 -- setup/common.sh@18 -- # local node=
00:05:52.181 23:19:13 -- setup/common.sh@19 -- # local var val
00:05:52.181 23:19:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:52.181 23:19:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.181 23:19:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.181 23:19:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.181 23:19:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.181 23:19:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.181 23:19:13 -- setup/common.sh@31 -- # IFS=': '
00:05:52.181 23:19:13 -- setup/common.sh@31 -- # read -r var val _
00:05:52.181 23:19:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 24090292 kB' 'MemAvailable: 29113144 kB' 'Buffers: 2704 kB' 'Cached: 13961168 kB' 'SwapCached: 0 kB' 'Active: 9843232 kB' 'Inactive: 4662844 kB' 'Active(anon): 9442216 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545576 kB' 'Mapped: 201604 kB' 'Shmem: 8900012 kB' 'KReclaimable: 505332 kB' 'Slab: 900716 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395384 kB' 'KernelStack: 12416 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10565948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196632 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[... per-key scan elided: every key before HugePages_Rsvd fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hits 'continue' ...]
00:05:52.443 23:19:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:52.443 23:19:13 -- setup/common.sh@33 -- # echo 0
00:05:52.443 23:19:13 -- setup/common.sh@33 -- # return 0
00:05:52.443 23:19:13 -- setup/hugepages.sh@100 -- # resv=0
00:05:52.443 23:19:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:05:52.443 nr_hugepages=1536
00:05:52.443 23:19:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:52.443 resv_hugepages=0
00:05:52.443 23:19:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:52.443 surplus_hugepages=0
00:05:52.443 23:19:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:52.443 anon_hugepages=0
00:05:52.443 23:19:13 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:52.443 23:19:13 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:05:52.443 23:19:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:52.443 23:19:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:52.443 23:19:13 -- setup/common.sh@18 -- # local node=
00:05:52.443 23:19:13 -- setup/common.sh@19 -- # local var val
00:05:52.443 23:19:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:52.443 23:19:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.443 23:19:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.443 23:19:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.443 23:19:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.443 23:19:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.443 23:19:13 -- setup/common.sh@31 -- # IFS=': '
00:05:52.443 23:19:13 -- setup/common.sh@31 -- # read -r var val _
00:05:52.443 23:19:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 24089940 kB' 'MemAvailable: 29112792 kB' 'Buffers: 2704 kB' 'Cached: 13961168 kB' 'SwapCached: 0 kB' 'Active: 9844756 kB' 'Inactive: 4662844 kB' 'Active(anon): 9443740 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547132 kB' 'Mapped: 202040 kB' 'Shmem: 8900012 kB' 'KReclaimable: 505332 kB' 'Slab: 900684 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395352 kB' 'KernelStack: 12544 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10570164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196776 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
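The checks on hugepages.sh lines 107-110 assert that the kernel-visible pool is internally consistent: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages, which with surp=0 and resv=0 here reduces to 1536 == 1536. The same identity written out as a standalone check (a sketch using this run's values; not the harness's code):

  # hugepage accounting identity, with values from this run:
  nr_hugepages=1536 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2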
[... per-key scan elided: every key before HugePages_Total fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hits 'continue' ...]
00:05:52.444 23:19:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:52.444 23:19:13 -- setup/common.sh@33 -- # echo 1536
00:05:52.444 23:19:13 -- setup/common.sh@33 -- # return 0
00:05:52.444 23:19:13 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:52.444 23:19:13 -- setup/hugepages.sh@112 -- # get_nodes
00:05:52.444 23:19:13 -- setup/hugepages.sh@27 -- # local node
00:05:52.444 23:19:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:52.444 23:19:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:52.444 23:19:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:52.444 23:19:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:52.444 23:19:13 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:52.444 23:19:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:52.445 23:19:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:52.445 23:19:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:52.445 23:19:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:52.445 23:19:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:52.445 23:19:13 -- setup/common.sh@18 -- # local node=0
00:05:52.445 23:19:13 -- setup/common.sh@19 -- # local var val
00:05:52.445 23:19:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:52.445 23:19:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.445 23:19:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:52.445 23:19:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:52.445 23:19:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.445 23:19:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': '
00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _
00:05:52.445 23:19:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 11859036 kB' 'MemUsed: 12760376 kB' 'SwapCached: 0 kB' 'Active: 7272004 kB' 'Inactive: 3443236 kB' 'Active(anon): 7109084 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565908 kB' 'Mapped: 124836 kB' 'AnonPages: 152464 kB' 'Shmem: 6959752 kB' 'KernelStack: 7928 kB' 'PageTables: 5184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582184 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
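get_nodes has discovered two NUMA nodes with 512 and 1024 pages requested respectively (nodes_sys), and the harness now re-reads the surplus counter per node from /sys/devices/system/node/nodeN/meminfo. Entries in that file carry a 'Node N ' prefix, which the mem=("${mem[@]#Node +([0-9]) }") step strips before the scan. A per-node view can be reproduced with (standard sysfs layout on a NUMA Linux host assumed):

  # per-node hugepage counters, mirroring what get_meminfo does with node=N:
  for n in /sys/devices/system/node/node[0-9]*; do
      echo "=== ${n##*/} ==="
      grep -E 'HugePages_(Total|Free|Surp)' "$n/meminfo"   # lines look like: Node 0 HugePages_Total: 512
  done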
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 
00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- 
setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # continue 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:52.445 23:19:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.445 23:19:13 -- setup/common.sh@33 -- # echo 0 00:05:52.445 23:19:13 -- setup/common.sh@33 -- # return 0 00:05:52.445 23:19:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:52.445 23:19:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:52.445 23:19:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:52.445 23:19:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:52.445 23:19:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:52.445 23:19:13 -- setup/common.sh@18 -- # local node=1 00:05:52.445 23:19:13 -- setup/common.sh@19 -- # local var val 00:05:52.445 23:19:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:52.445 23:19:13 -- 
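Every get_meminfo call in this log expands to the same idiom: pick /proc/meminfo or a node's meminfo file, strip the "Node N " prefix, then scan "key: value" pairs until the requested key matches. A minimal runnable sketch of that idiom, reconstructed from the traced commands above (not the verbatim setup/common.sh):

#!/usr/bin/env bash
# extglob is required for the "Node +([0-9]) " prefix strip seen in the trace.
shopt -s extglob

get_meminfo_sketch() {
	local get=$1 node=$2
	local var val _ line
	local mem_f=/proc/meminfo mem
	# Per-node files prefix every line with "Node N ", e.g. "Node 0 MemTotal: ..."
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"  # split "Key: value kB"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

get_meminfo_sketch HugePages_Surp 0  # prints 0 for node0 in the run above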
00:05:52.445 23:19:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:52.445 23:19:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:52.445 23:19:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:52.445 23:19:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:52.445 23:19:13 -- setup/common.sh@18 -- # local node=1 00:05:52.445 23:19:13 -- setup/common.sh@19 -- # local var val 00:05:52.445 23:19:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:52.445 23:19:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.445 23:19:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:52.445 23:19:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:52.445 23:19:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.445 23:19:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:52.445 23:19:13 -- setup/common.sh@31 -- # read -r var val _
00:05:52.445 23:19:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407260 kB' 'MemFree: 12226744 kB' 'MemUsed: 7180516 kB' 'SwapCached: 0 kB' 'Active: 2575696 kB' 'Inactive: 1219608 kB' 'Active(anon): 2337600 kB' 'Inactive(anon): 0 kB' 'Active(file): 238096 kB' 'Inactive(file): 1219608 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3398008 kB' 'Mapped: 77684 kB' 'AnonPages: 397436 kB' 'Shmem: 1940304 kB' 'KernelStack: 4760 kB' 'PageTables: 3580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128828 kB' 'Slab: 318500 kB' 'SReclaimable: 128828 kB' 'SUnreclaim: 189672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:52.446 23:19:13 -- setup/common.sh@32 -- # [... each node1 key from MemTotal through HugePages_Free compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with 'continue' ...]
00:05:52.446 23:19:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.446 23:19:13 -- setup/common.sh@33 -- # echo 0 00:05:52.446 23:19:13 -- setup/common.sh@33 -- # return 0
00:05:52.446 23:19:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:52.446 23:19:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:52.446 23:19:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:52.446 23:19:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:52.446 23:19:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:05:52.446 23:19:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:52.446 23:19:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:52.446 23:19:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:52.446 23:19:13 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:05:52.446 23:19:13 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:52.446 real 0m1.924s
00:05:52.446 user 0m0.758s
00:05:52.446 sys 0m1.140s
00:05:52.446 23:19:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.446 23:19:13 -- common/autotest_common.sh@10 -- # set +x
00:05:52.446 ************************************
00:05:52.446 END TEST custom_alloc
00:05:52.446 ************************************
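custom_alloc's pass condition is the string compare at hugepages.sh@130: the observed per-node totals, joined as "512,1024", must match the expected split. A sketch of the same check done directly against the per-node meminfo files (the associative array and awk extraction are illustrative, not the script's own code; the expected values are from this run):

#!/usr/bin/env bash
# Compare per-node HugePages_Total against an expected per-node split.
declare -A expected=([0]=512 [1]=1024)

status=0
for node in "${!expected[@]}"; do
	# Node files carry lines like "Node 0 HugePages_Total:   512".
	total=$(awk '/HugePages_Total/ {print $NF}' \
		"/sys/devices/system/node/node$node/meminfo")
	echo "node$node=$total expecting ${expected[$node]}"
	(( total == expected[node] )) || status=1
done
exit $status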
00:05:52.446 23:19:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:52.446 23:19:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.446 23:19:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.446 23:19:13 -- common/autotest_common.sh@10 -- # set +x
00:05:52.446 ************************************
00:05:52.446 START TEST no_shrink_alloc
00:05:52.446 ************************************
00:05:52.446 23:19:13 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:05:52.446 23:19:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:52.446 23:19:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:52.446 23:19:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:52.446 23:19:13 -- setup/hugepages.sh@51 -- # shift 00:05:52.446 23:19:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:52.446 23:19:13 -- setup/hugepages.sh@52 -- # local node_ids
00:05:52.446 23:19:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:52.446 23:19:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:52.446 23:19:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:52.446 23:19:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:52.446 23:19:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:52.446 23:19:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:52.446 23:19:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:52.446 23:19:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:52.446 23:19:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:52.447 23:19:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:52.447 23:19:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:52.447 23:19:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:52.447 23:19:13 -- setup/hugepages.sh@73 -- # return 0
00:05:52.447 23:19:13 -- setup/hugepages.sh@198 -- # setup output 00:05:52.447 23:19:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.447 23:19:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:54.357 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:54.357 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:54.357 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:54.357 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:54.357 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:54.357 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:54.357 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:54.357 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:54.357 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:54.357 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:54.357 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:54.357 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:54.357 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:54.357 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:54.357 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:54.357 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:54.357 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
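The nr_hugepages=1024 at hugepages.sh@57 follows directly from the requested size and the system hugepage size: 2097152 kB / 2048 kB per page = 1024 pages, which matches the 'Hugetlb: 2097152 kB' entries in the meminfo dumps below. A sketch of that arithmetic, assuming (as this run suggests) the size argument is in kB:

#!/usr/bin/env bash
# Derive the hugepage count for a requested pool size.
size_kb=2097152
hugepagesize_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
echo $(( size_kb / hugepagesize_kb ))  # 1024 on this 2048 kB hugepage system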
00:05:54.357 23:19:15 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:54.357 23:19:15 -- setup/hugepages.sh@89 -- # local node 00:05:54.357 23:19:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:54.357 23:19:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:54.357 23:19:15 -- setup/hugepages.sh@92 -- # local surp 00:05:54.357 23:19:15 -- setup/hugepages.sh@93 -- # local resv 00:05:54.357 23:19:15 -- setup/hugepages.sh@94 -- # local anon
00:05:54.357 23:19:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:54.357 23:19:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:54.357 23:19:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:54.357 23:19:15 -- setup/common.sh@18 -- # local node= 00:05:54.357 23:19:15 -- setup/common.sh@19 -- # local var val 00:05:54.357 23:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:54.357 23:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.357 23:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.357 23:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:54.357 23:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.357 23:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.357 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.357 23:19:15 -- setup/common.sh@31 -- # read -r var val _
00:05:54.357 23:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25044740 kB' 'MemAvailable: 30067592 kB' 'Buffers: 2704 kB' 'Cached: 13961248 kB' 'SwapCached: 0 kB' 'Active: 9842724 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441708 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544764 kB' 'Mapped: 201616 kB' 'Shmem: 8900092 kB' 'KReclaimable: 505332 kB' 'Slab: 900752 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395420 kB' 'KernelStack: 12352 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10563352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196808 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:54.358 23:19:15 -- setup/common.sh@32 -- # [... each /proc/meminfo key from MemTotal through HardwareCorrupted compared against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped with 'continue' ...]
00:05:54.358 23:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.358 23:19:15 -- setup/common.sh@33 -- # echo 0 00:05:54.358 23:19:15 -- setup/common.sh@33 -- # return 0
00:05:54.358 23:19:15 -- setup/hugepages.sh@97 -- # anon=0
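verify_nr_hugepages first probes transparent hugepages at hugepages.sh@96: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never"), and AnonHugePages is only expected to grow when that mode is not [never]. A sketch of the probe (the echoed message is illustrative):

#!/usr/bin/env bash
# Check whether transparent hugepages are active; the bracketed token is
# the current mode, e.g. "always [madvise] never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
	echo "THP enabled: $thp (AnonHugePages in /proc/meminfo may be nonzero)"
fi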
00:05:54.358 23:19:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:54.358 23:19:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.358 23:19:15 -- setup/common.sh@18 -- # local node= 00:05:54.358 23:19:15 -- setup/common.sh@19 -- # local var val 00:05:54.358 23:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:54.358 23:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.358 23:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.358 23:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:54.358 23:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.358 23:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.358 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.358 23:19:15 -- setup/common.sh@31 -- # read -r var val _
00:05:54.358 23:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25050664 kB' 'MemAvailable: 30073516 kB' 'Buffers: 2704 kB' 'Cached: 13961252 kB' 'SwapCached: 0 kB' 'Active: 9842620 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441604 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544672 kB' 'Mapped: 201612 kB' 'Shmem: 8900096 kB' 'KReclaimable: 505332 kB' 'Slab: 900744 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395412 kB' 'KernelStack: 12368 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10564116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196792 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:54.359 23:19:15 -- setup/common.sh@32 -- # [... each /proc/meminfo key from MemTotal through HugePages_Rsvd compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with 'continue' ...]
00:05:54.359 23:19:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.359 23:19:15 -- setup/common.sh@33 -- # echo 0 00:05:54.359 23:19:15 -- setup/common.sh@33 -- # return 0
00:05:54.359 23:19:15 -- setup/hugepages.sh@99 -- # surp=0
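With anon=0 and surp=0 collected, and HugePages_Rsvd queried next, the accounting the suite relies on is the identity already checked at hugepages.sh@110: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages. A sketch of that identity against /proc/meminfo (awk stands in for get_meminfo here; the default of 1024 matches this run):

#!/usr/bin/env bash
# Verify the hugepage pool accounting: total == requested + surplus + reserved.
nr_hugepages=${1:-1024}
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
	echo "pool consistent: $total == $nr_hugepages + $surp + $resv"
else
	echo "pool mismatch: $total != $nr_hugepages + $surp + $resv" >&2
	exit 1
fi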
00:05:54.359 23:19:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:54.359 23:19:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:54.359 23:19:15 -- setup/common.sh@18 -- # local node= 00:05:54.359 23:19:15 -- setup/common.sh@19 -- # local var val 00:05:54.359 23:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:54.359 23:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.359 23:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.359 23:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:54.359 23:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.359 23:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.359 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.359 23:19:15 -- setup/common.sh@31 -- # read -r var val _
00:05:54.359 23:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25055124 kB' 'MemAvailable: 30077976 kB' 'Buffers: 2704 kB' 'Cached: 13961264 kB' 'SwapCached: 0 kB' 'Active: 9842852 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441836 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545068 kB' 'Mapped: 201612 kB' 'Shmem: 8900108 kB' 'KReclaimable: 505332 kB' 'Slab: 900732 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395400 kB' 'KernelStack: 12416 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10563012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196744 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:54.359 23:19:15 -- setup/common.sh@32 -- # [... each /proc/meminfo key compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with 'continue' ...]
-- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # continue 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.360 23:19:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.360 23:19:15 -- setup/common.sh@33 -- # echo 0 00:05:54.360 23:19:15 -- setup/common.sh@33 -- # return 0 00:05:54.360 23:19:15 -- setup/hugepages.sh@100 -- # resv=0 00:05:54.360 23:19:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 
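The trace above is the whole of get_meminfo: pick a meminfo file, strip any per-node prefix, then scan line by line for the requested key and print its value. A minimal re-creation of the helper as it appears in the trace (an inferred sketch, not the verbatim setup/common.sh):

    #!/usr/bin/env bash
    # Sketch of get_meminfo as inferred from the trace above
    # (assumption: simplified re-creation, not the real SPDK helper).
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # A node argument switches to that NUMA node's own meminfo file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # numeric value only; the trailing unit (kB) lands in $_
                return 0
            fi
        done
        return 1
    }

With no node argument it answers from /proc/meminfo (get_meminfo HugePages_Rsvd prints 0 here); with one, e.g. get_meminfo HugePages_Surp 0, it answers from node0's own meminfo file, as traced further below.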
00:05:54.360 23:19:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:54.360 nr_hugepages=1024
00:05:54.360 23:19:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:54.360 resv_hugepages=0
00:05:54.360 23:19:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:54.360 surplus_hugepages=0
00:05:54.360 23:19:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:54.360 anon_hugepages=0
00:05:54.360 23:19:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:54.360 23:19:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
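The two (( ... )) guards above are the pool-consistency check: every hugepage the kernel reports must be accounted for by the requested count plus surplus plus reserved pages. With the values just traced (variable names are illustrative; the script's own are not all visible in the trace):

    # Values taken from the trace above
    nr_hugepages=1024   # requested pool size
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total, fetched next in the trace

    # Pool is consistent only if allocated pages are all requested,
    # surplus, or reserved: 1024 == 1024 + 0 + 0, so the guard passes.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"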
00:05:54.360 23:19:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:54.360 23:19:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:54.360 23:19:15 -- setup/common.sh@18 -- # local node=
00:05:54.360 23:19:15 -- setup/common.sh@19 -- # local var val
00:05:54.360 23:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:54.360 23:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:54.360 23:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:54.360 23:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:54.360 23:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:54.360 23:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:54.360 23:19:15 -- setup/common.sh@31 -- # IFS=': '
00:05:54.360 23:19:15 -- setup/common.sh@31 -- # read -r var val _
00:05:54.360 23:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25054664 kB' 'MemAvailable: 30077516 kB' 'Buffers: 2704 kB' 'Cached: 13961276 kB' 'SwapCached: 0 kB' 'Active: 9842240 kB' 'Inactive: 4662844 kB' 'Active(anon): 9441224 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544348 kB' 'Mapped: 201612 kB' 'Shmem: 8900120 kB' 'KReclaimable: 505332 kB' 'Slab: 900748 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395416 kB' 'KernelStack: 12352 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10563024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[setup/common.sh@31-32 walk the dump above key by key -- IFS=': ', read -r var val _, continue -- until HugePages_Total matches]
00:05:54.361 23:19:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:54.361 23:19:15 -- setup/common.sh@33 -- # echo 1024
00:05:54.361 23:19:15 -- setup/common.sh@33 -- # return 0
00:05:54.361 23:19:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:54.361 23:19:15 -- setup/hugepages.sh@112 -- # get_nodes
00:05:54.361 23:19:15 -- setup/hugepages.sh@27 -- # local node
00:05:54.361 23:19:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:54.361 23:19:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:54.361 23:19:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:54.361 23:19:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:54.361 23:19:15 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:54.361 23:19:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
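get_nodes, traced above, records how many hugepages each NUMA node actually holds. Equivalent logic, assuming the counts come from the per-node sysfs files for the 2048 kB page size (the traced helper may source them differently):

    # Sketch: per-node 2048 kB hugepage counts, one array slot per NUMA node
    # (assumption: sysfs path for the default 2 MB hugepage size).
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path prefix, leaving the numeric node id
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1   # at least one NUMA node must exist

On this host that yields nodes_sys=(1024 0): all 1024 pages live on node0, none on node1, which is why only node0 is queried in the per-node loop that follows.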
00:05:54.361 23:19:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:54.361 23:19:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:54.361 23:19:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:54.361 23:19:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:54.361 23:19:15 -- setup/common.sh@18 -- # local node=0
00:05:54.361 23:19:15 -- setup/common.sh@19 -- # local var val
00:05:54.361 23:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:54.361 23:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:54.361 23:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:54.361 23:19:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:54.361 23:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:54.361 23:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:54.361 23:19:15 -- setup/common.sh@31 -- # IFS=': '
00:05:54.361 23:19:15 -- setup/common.sh@31 -- # read -r var val _
00:05:54.361 23:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 10816768 kB' 'MemUsed: 13802644 kB' 'SwapCached: 0 kB' 'Active: 7266904 kB' 'Inactive: 3443236 kB' 'Active(anon): 7103984 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565916 kB' 'Mapped: 124844 kB' 'AnonPages: 147372 kB' 'Shmem: 6959760 kB' 'KernelStack: 7624 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582184 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 walk the node0 dump above key by key -- IFS=': ', read -r var val _, continue -- until HugePages_Surp matches]
00:05:54.362 23:19:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:54.362 23:19:15 -- setup/common.sh@33 -- # echo 0
00:05:54.362 23:19:15 -- setup/common.sh@33 -- # return 0
00:05:54.362 23:19:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:54.362 23:19:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:54.362 23:19:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:54.362 23:19:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:54.362 23:19:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:54.362 node0=1024 expecting 1024
00:05:54.362 23:19:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:54.362 23:19:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:54.362 23:19:15 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:54.362 23:19:15 -- setup/hugepages.sh@202 -- # setup output
00:05:54.362 23:19:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:54.362 23:19:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:55.738 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:55.738 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:55.738 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:55.738 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:55.738 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:55.738 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:55.738 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:55.738 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:55.738 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:55.738 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:55.738 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:55.738 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:55.738 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:55.738 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:55.738 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:55.738 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:55.738 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:56.000 INFO: Requested 512 hugepages but 1024 already allocated on node0
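The INFO line is scripts/setup.sh declining to change an already-sufficient pool: 512 pages were requested (NRHUGE=512), node0 already pins 1024, and CLEAR_HUGE=no keeps the existing allocation in place. The decision amounts to the following sketch (hypothetical variable names, simplified from the real script):

    NRHUGE=512
    node=0
    nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    allocated=$(< "$nr")
    if (( allocated >= NRHUGE )); then
        # Enough pages already pinned; leave them alone (CLEAR_HUGE=no).
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node$node"
    else
        echo "$NRHUGE" > "$nr"   # grow the pool; requires root
    fi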
00:05:56.000 23:19:16 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:56.000 23:19:16 -- setup/hugepages.sh@89 -- # local node
00:05:56.000 23:19:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:56.000 23:19:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:56.000 23:19:16 -- setup/hugepages.sh@92 -- # local surp
00:05:56.000 23:19:16 -- setup/hugepages.sh@93 -- # local resv
00:05:56.000 23:19:16 -- setup/hugepages.sh@94 -- # local anon
00:05:56.000 23:19:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:56.000 23:19:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:56.000 23:19:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:56.000 23:19:16 -- setup/common.sh@18 -- # local node=
00:05:56.000 23:19:16 -- setup/common.sh@19 -- # local var val
00:05:56.000 23:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:56.000 23:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.000 23:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.000 23:19:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.000 23:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.000 23:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.000 23:19:16 -- setup/common.sh@31 -- # IFS=': '
00:05:56.000 23:19:16 -- setup/common.sh@31 -- # read -r var val _
00:05:56.000 23:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25069660 kB' 'MemAvailable: 30092512 kB' 'Buffers: 2704 kB' 'Cached: 13961328 kB' 'SwapCached: 0 kB' 'Active: 9847124 kB' 'Inactive: 4662844 kB' 'Active(anon): 9446108 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549084 kB' 'Mapped: 202948 kB' 'Shmem: 8900172 kB' 'KReclaimable: 505332 kB' 'Slab: 900636 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395304 kB' 'KernelStack: 12784 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10604696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[setup/common.sh@31-32 walk the dump above key by key -- IFS=': ', read -r var val _, continue -- until AnonHugePages matches]
00:05:56.001 23:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:56.001 23:19:16 -- setup/common.sh@33 -- # echo 0
00:05:56.001 23:19:16 -- setup/common.sh@33 -- # return 0
00:05:56.001 23:19:16 -- setup/hugepages.sh@97 -- # anon=0
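verify_nr_hugepages begins by ruling out transparent-hugepage noise: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above inspects the kernel's THP mode string, and only when THP is not globally off does it read AnonHugePages (0 kB here, hence anon=0). Sketched below, reusing the get_meminfo sketch from earlier (a simplification of the traced logic):

    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP active in some form: anonymous huge mappings may exist, so count them.
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon_hugepages=$anon"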
00:05:56.001 23:19:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:56.001 23:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:56.001 23:19:16 -- setup/common.sh@18 -- # local node=
00:05:56.001 23:19:16 -- setup/common.sh@19 -- # local var val
00:05:56.001 23:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:56.001 23:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.001 23:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.001 23:19:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.001 23:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.001 23:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.001 23:19:16 -- setup/common.sh@31 -- # IFS=': '
00:05:56.001 23:19:16 -- setup/common.sh@31 -- # read -r var val _
00:05:56.001 23:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25073928 kB' 'MemAvailable: 30096780 kB' 'Buffers: 2704 kB' 'Cached: 13961332 kB' 'SwapCached: 0 kB' 'Active: 9850168 kB' 'Inactive: 4662844 kB' 'Active(anon): 9449152 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552092 kB' 'Mapped: 202948 kB' 'Shmem: 8900176 kB' 'KReclaimable: 505332 kB' 'Slab: 900616 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395284 kB' 'KernelStack: 12784 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10605820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
[setup/common.sh@31-32 walk the dump above key by key -- IFS=': ', read -r var val _, continue -- skipping MemTotal through Zswapped]
00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:56.002 23:19:16 --
setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.002 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.002 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.003 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.003 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.003 23:19:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.003 23:19:16 -- setup/common.sh@32 -- # continue 00:05:56.003 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.003 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.003 23:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.003 23:19:16 -- setup/common.sh@33 -- # echo 0 00:05:56.003 23:19:16 -- setup/common.sh@33 -- # return 0 00:05:56.003 23:19:16 -- setup/hugepages.sh@99 -- # surp=0 00:05:56.003 23:19:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:56.003 23:19:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:56.003 23:19:16 -- setup/common.sh@18 -- # local node= 00:05:56.003 23:19:16 -- setup/common.sh@19 -- # local var val 00:05:56.003 23:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.003 23:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.003 23:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.003 23:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.003 23:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.003 23:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.003 23:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.003 23:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.003 23:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25072872 kB' 'MemAvailable: 30095724 kB' 'Buffers: 2704 kB' 'Cached: 13961344 kB' 'SwapCached: 0 kB' 'Active: 9850980 kB' 'Inactive: 4662844 kB' 'Active(anon): 9449964 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 
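What the xtrace above is doing is easier to see outside the harness: setup/common.sh slurps the meminfo file with mapfile, strips the "Node N" prefix that the per-node sysfs variant carries, then walks the lines with IFS=': ' read until the requested key matches. A minimal standalone sketch of that pattern (the function name and defaults here are illustrative, not the harness's exact code; assumes bash 4+ for mapfile):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch of the get_meminfo pattern seen in the xtrace above: read one
    # numeric field from /proc/meminfo, or from a per-node sysfs meminfo
    # when a node id is given. Illustrative, not the harness's exact code.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Per-node counters live under sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # sysfs lines look like "Node 0 MemTotal: ..."; drop that prefix so
        # both file flavors parse identically.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp      # global counter
    get_meminfo_sketch HugePages_Surp 0    # node0 counter, if sysfs exposes it

Each field that is not the requested key produces one test/continue pair under set -x, which is why the scan above is so verbose in the log.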
00:05:56.003 23:19:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:56.003 23:19:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:56.003 23:19:16 -- setup/common.sh@18 -- # local node=
00:05:56.003 23:19:16 -- setup/common.sh@19 -- # local var val
00:05:56.003 23:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:56.003 23:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.003 23:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.003 23:19:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.003 23:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.003 23:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.003 23:19:16 -- setup/common.sh@31 -- # IFS=': '
00:05:56.003 23:19:16 -- setup/common.sh@31 -- # read -r var val _
00:05:56.003 23:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25072872 kB' 'MemAvailable: 30095724 kB' 'Buffers: 2704 kB' 'Cached: 13961344 kB' 'SwapCached: 0 kB' 'Active: 9850980 kB' 'Inactive: 4662844 kB' 'Active(anon): 9449964 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552960 kB' 'Mapped: 203396 kB' 'Shmem: 8900188 kB' 'KReclaimable: 505332 kB' 'Slab: 900616 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395284 kB' 'KernelStack: 12912 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10608300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:56.003 23:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:56.003 23:19:16 -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31 IFS=': ' / @31 read -r var val _ / @32 field test / @32 continue cycle repeats for every remaining meminfo field until HugePages_Rsvd is reached]
00:05:56.004 23:19:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:56.004 23:19:16 -- setup/common.sh@33 -- # echo 0
00:05:56.004 23:19:16 -- setup/common.sh@33 -- # return 0
00:05:56.004 23:19:16 -- setup/hugepages.sh@100 -- # resv=0
00:05:56.004 23:19:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:56.004 nr_hugepages=1024
00:05:56.004 23:19:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:56.004 resv_hugepages=0
00:05:56.004 23:19:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:56.004 surplus_hugepages=0
00:05:56.004 23:19:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:56.004 anon_hugepages=0
00:05:56.004 23:19:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:56.004 23:19:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
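The (( 1024 == nr_hugepages + surp + resv )) assertions are the heart of this test: the pool the kernel reports must equal what was requested plus any surplus and reserved pages, all of which are expected to be zero here. The same bookkeeping can be checked by hand; a hedged sketch mirroring the harness's @107 check (the paths are standard kernel ABI, the variable names are mine):

    #!/usr/bin/env bash
    # Check hugepage pool accounting the way hugepages.sh does above.
    req=$(cat /proc/sys/vm/nr_hugepages)

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

    echo "nr_hugepages=$req total=$total surp=$surp rsvd=$rsvd"

    # Mirrors the test's assertion; in this run surp and rsvd are both 0,
    # so total must match the requested pool exactly.
    if (( total == req + surp + rsvd )); then
        echo "hugepage accounting consistent"
    else
        echo "mismatch: total=$total, expected=$((req + surp + rsvd))" >&2
    fi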
00:05:56.004 23:19:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:56.004 23:19:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:56.004 23:19:16 -- setup/common.sh@18 -- # local node=
00:05:56.004 23:19:16 -- setup/common.sh@19 -- # local var val
00:05:56.004 23:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:56.004 23:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.004 23:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.004 23:19:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.004 23:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.004 23:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.004 23:19:16 -- setup/common.sh@31 -- # IFS=': '
00:05:56.004 23:19:16 -- setup/common.sh@31 -- # read -r var val _
00:05:56.004 23:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25068996 kB' 'MemAvailable: 30091848 kB' 'Buffers: 2704 kB' 'Cached: 13961348 kB' 'SwapCached: 0 kB' 'Active: 9847688 kB' 'Inactive: 4662844 kB' 'Active(anon): 9446672 kB' 'Inactive(anon): 0 kB' 'Active(file): 401016 kB' 'Inactive(file): 4662844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549684 kB' 'Mapped: 202920 kB' 'Shmem: 8900192 kB' 'KReclaimable: 505332 kB' 'Slab: 900680 kB' 'SReclaimable: 505332 kB' 'SUnreclaim: 395348 kB' 'KernelStack: 12560 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10601476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196888 kB' 'VmallocChunk: 0 kB' 'Percpu: 51840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1867356 kB' 'DirectMap2M: 20072448 kB' 'DirectMap1G: 30408704 kB'
00:05:56.004 23:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:56.004 23:19:16 -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31 IFS=': ' / @31 read -r var val _ / @32 field test / @32 continue cycle repeats for every remaining meminfo field until HugePages_Total is reached]
00:05:56.005 23:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:56.005 23:19:16 -- setup/common.sh@33 -- # echo 1024
00:05:56.005 23:19:16 -- setup/common.sh@33 -- # return 0
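Each get_meminfo call above walks the whole file under set -x, which is why the log is so verbose; outside a harness that wants pure-bash parsing, the same lookup is a one-liner with standard tools (illustrative equivalents, not from the harness):

    # One-field lookups equivalent to the get_meminfo calls above.
    awk '/^HugePages_Total:/ {print $2}' /proc/meminfo   # 1024
    awk '/^Hugepagesize:/ {print $2}' /proc/meminfo      # 2048 (kB)
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo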
00:05:56.005 23:19:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:56.005 23:19:16 -- setup/hugepages.sh@112 -- # get_nodes
00:05:56.005 23:19:16 -- setup/hugepages.sh@27 -- # local node
00:05:56.005 23:19:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:56.005 23:19:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:56.005 23:19:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:56.005 23:19:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:56.005 23:19:16 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:56.005 23:19:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:56.005 23:19:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:56.006 23:19:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:56.006 23:19:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:56.006 23:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:56.006 23:19:16 -- setup/common.sh@18 -- # local node=0
00:05:56.006 23:19:16 -- setup/common.sh@19 -- # local var val
00:05:56.006 23:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:56.006 23:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.006 23:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:56.006 23:19:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:56.006 23:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.006 23:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.006 23:19:16 -- setup/common.sh@31 -- # IFS=': '
00:05:56.006 23:19:16 -- setup/common.sh@31 -- # read -r var val _
00:05:56.006 23:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 10822992 kB' 'MemUsed: 13796420 kB' 'SwapCached: 0 kB' 'Active: 7268856 kB' 'Inactive: 3443236 kB' 'Active(anon): 7105936 kB' 'Inactive(anon): 0 kB' 'Active(file): 162920 kB' 'Inactive(file): 3443236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10565932 kB' 'Mapped: 125012 kB' 'AnonPages: 149280 kB' 'Shmem: 6959776 kB' 'KernelStack: 7784 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 376504 kB' 'Slab: 582156 kB' 'SReclaimable: 376504 kB' 'SUnreclaim: 205652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:56.006 23:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:56.006 23:19:16 -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31 IFS=': ' / @31 read -r var val _ / @32 field test / @32 continue cycle repeats for every remaining node0 meminfo field until HugePages_Surp is reached]
00:05:56.006 23:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:56.007 23:19:16 -- setup/common.sh@33 -- # echo 0
00:05:56.007 23:19:16 -- setup/common.sh@33 -- # return 0
00:05:56.007 23:19:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:56.007 23:19:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:56.007 23:19:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:56.007 23:19:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:56.007 23:19:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:56.007 node0=1024 expecting 1024
00:05:56.007 23:19:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:56.007
00:05:56.007 real	0m3.612s
00:05:56.007 user	0m1.403s
00:05:56.007 sys	0m2.161s
00:05:56.007 23:19:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:56.007 23:19:16 -- common/autotest_common.sh@10 -- # set +x
00:05:56.007 ************************************
00:05:56.007 END TEST no_shrink_alloc
00:05:56.007 ************************************
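get_nodes and the node-scoped get_meminfo call show the per-NUMA-node side of the accounting: nodes are discovered by globbing /sys/devices/system/node/node<N>, and each node's pool is read from that node's own meminfo, whose lines carry a "Node N" prefix. A sketch of the same walk (assumes extglob, which the harness sets globally; the script itself is illustrative):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes and report each node's 2 MB hugepage pool,
    # mirroring get_nodes + get_meminfo HugePages_Surp <node> above.
    shopt -s extglob nullglob

    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # Per-node pool size for the default 2048 kB hugepages.
        nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    echo "no_nodes=${#nodes_sys[@]}"
    for n in "${!nodes_sys[@]}"; do
        # Per-node meminfo lines read "Node 0 HugePages_Surp: 0",
        # so the value sits in field 4, not field 2.
        surp=$(awk '/HugePages_Surp:/ {print $4}' \
               "/sys/devices/system/node/node$n/meminfo")
        echo "node$n: nr_hugepages=${nodes_sys[$n]} surp=$surp"
    done

On this box that prints two nodes, with the full 1024-page pool pinned to node0, which is exactly the "node0=1024 expecting 1024" line the test just verified.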
00:05:56.007 23:19:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:56.007 23:19:16 -- setup/hugepages.sh@41 -- # echo 0 00:05:56.007 23:19:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:56.007 23:19:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:56.007 23:19:16 -- setup/hugepages.sh@41 -- # echo 0 00:05:56.007 23:19:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:56.007 23:19:16 -- setup/hugepages.sh@41 -- # echo 0 00:05:56.007 23:19:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:56.007 23:19:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:56.007 00:05:56.007 real 0m14.840s 00:05:56.007 user 0m5.693s 00:05:56.007 sys 0m8.200s 00:05:56.007 23:19:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.007 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 ************************************ 00:05:56.007 END TEST hugepages 00:05:56.007 ************************************ 00:05:56.265 23:19:16 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:56.265 23:19:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.265 23:19:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.265 23:19:16 -- common/autotest_common.sh@10 -- # set +x 00:05:56.265 ************************************ 00:05:56.265 START TEST driver 00:05:56.265 ************************************ 00:05:56.265 23:19:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:56.265 * Looking for test storage... 
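The clear_hp pass above walks every node's hugepages-* sysfs directories and writes 0 into each, so the next suite starts from a clean slate. Roughly, assuming the standard sysfs layout (the real helper also restores any previously reserved pages, omitted here):

# Sketch: release all persistent hugepages on every NUMA node.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"   # one write per "echo 0" entry in the trace
  done
done
export CLEAR_HUGE=yes             # flags later setup.sh runs that pages were cleared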
00:05:56.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:56.265 23:19:17 -- setup/driver.sh@68 -- # setup reset 00:05:56.265 23:19:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:56.265 23:19:17 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:59.596 23:19:20 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:59.597 23:19:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.597 23:19:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.597 23:19:20 -- common/autotest_common.sh@10 -- # set +x 00:05:59.597 ************************************ 00:05:59.597 START TEST guess_driver 00:05:59.597 ************************************ 00:05:59.597 23:19:20 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:59.597 23:19:20 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:59.597 23:19:20 -- setup/driver.sh@47 -- # local fail=0 00:05:59.597 23:19:20 -- setup/driver.sh@49 -- # pick_driver 00:05:59.597 23:19:20 -- setup/driver.sh@36 -- # vfio 00:05:59.597 23:19:20 -- setup/driver.sh@21 -- # local iommu_groups 00:05:59.597 23:19:20 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:59.597 23:19:20 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:59.597 23:19:20 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:59.597 23:19:20 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:59.597 23:19:20 -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:05:59.597 23:19:20 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:59.597 23:19:20 -- setup/driver.sh@14 -- # mod vfio_pci 00:05:59.597 23:19:20 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:59.597 23:19:20 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:59.597 23:19:20 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:59.597 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:59.597 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:59.597 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:59.597 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:59.597 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:59.597 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:59.597 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:59.597 23:19:20 -- setup/driver.sh@30 -- # return 0 00:05:59.597 23:19:20 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:59.597 23:19:20 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:59.597 23:19:20 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:59.597 23:19:20 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:59.597 Looking for driver=vfio-pci 00:05:59.597 23:19:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:59.597 23:19:20 -- setup/driver.sh@45 -- # setup output config 00:05:59.597 23:19:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.597 23:19:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:00.979 23:19:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.979 23:19:21 -- setup/driver.sh@61 -- # [[ vfio-pci ==
vfio-pci ]] 00:06:00.979 23:19:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.979 23:19:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.979 23:19:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.979 23:19:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver [the identical @58/@61/@57 cycle repeats verbatim, timestamped 00:06:00.979 23:19:21, for each remaining device in the config output] 00:06:01.919 23:19:22 -- setup/driver.sh@58 -- # [[
-> == \-\> ]] 00:06:01.919 23:19:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.919 23:19:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.919 23:19:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:01.919 23:19:22 -- setup/driver.sh@65 -- # setup reset 00:06:01.919 23:19:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:01.919 23:19:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:05.208 00:06:05.208 real 0m5.809s 00:06:05.208 user 0m1.335s 00:06:05.208 sys 0m2.561s 00:06:05.208 23:19:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.208 23:19:25 -- common/autotest_common.sh@10 -- # set +x 00:06:05.208 ************************************ 00:06:05.208 END TEST guess_driver 00:06:05.208 ************************************ 00:06:05.208 00:06:05.208 real 0m8.873s 00:06:05.208 user 0m2.061s 00:06:05.208 sys 0m3.948s 00:06:05.208 23:19:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.208 23:19:25 -- common/autotest_common.sh@10 -- # set +x 00:06:05.208 ************************************ 00:06:05.208 END TEST driver 00:06:05.208 ************************************ 00:06:05.208 23:19:25 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:05.208 23:19:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.208 23:19:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.208 23:19:25 -- common/autotest_common.sh@10 -- # set +x 00:06:05.208 ************************************ 00:06:05.208 START TEST devices 00:06:05.209 ************************************ 00:06:05.209 23:19:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:05.209 * Looking for test storage... 
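The guess_driver run above picks vfio-pci by checking the vfio no-IOMMU module parameter, counting /sys/kernel/iommu_groups entries (the (( 143 > 0 )) test), and confirming that modprobe resolves vfio_pci to real kernel objects. Condensed into one function; treat this as a sketch of the logic rather than driver.sh itself, though the failure string matches what the @51 test compares against:

# Sketch: prefer vfio-pci when the IOMMU is populated and the module resolves.
pick_driver() {
  local groups=(/sys/kernel/iommu_groups/*)
  if (( ${#groups[@]} > 0 )) &&
     modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
    echo vfio-pci
  else
    echo 'No valid driver found'
  fi
}

driver=$(pick_driver)
[[ $driver != 'No valid driver found' ]] && echo "Looking for driver=$driver"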
00:06:05.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:05.209 23:19:25 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:05.209 23:19:25 -- setup/devices.sh@192 -- # setup reset 00:06:05.209 23:19:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:05.209 23:19:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:07.120 23:19:27 -- setup/devices.sh@194 -- # get_zoned_devs 00:06:07.120 23:19:27 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:06:07.120 23:19:27 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:06:07.120 23:19:27 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:06:07.120 23:19:27 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:06:07.120 23:19:27 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:06:07.120 23:19:27 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:06:07.120 23:19:27 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:07.120 23:19:27 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:06:07.120 23:19:27 -- setup/devices.sh@196 -- # blocks=() 00:06:07.120 23:19:27 -- setup/devices.sh@196 -- # declare -a blocks 00:06:07.120 23:19:27 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:07.120 23:19:27 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:07.120 23:19:27 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:07.120 23:19:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:07.120 23:19:27 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:07.120 23:19:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:07.120 23:19:27 -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:06:07.120 23:19:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:06:07.120 23:19:27 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:07.120 23:19:27 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:06:07.120 23:19:27 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:07.380 No valid GPT data, bailing 00:06:07.380 23:19:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:07.380 23:19:28 -- scripts/common.sh@393 -- # pt= 00:06:07.380 23:19:28 -- scripts/common.sh@394 -- # return 1 00:06:07.380 23:19:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:07.380 23:19:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:07.380 23:19:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:07.380 23:19:28 -- setup/common.sh@80 -- # echo 1000204886016 00:06:07.380 23:19:28 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:06:07.380 23:19:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:07.380 23:19:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:06:07.380 23:19:28 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:07.380 23:19:28 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:07.380 23:19:28 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:07.380 23:19:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.380 23:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.380 23:19:28 -- common/autotest_common.sh@10 -- # set +x 00:06:07.380 ************************************ 00:06:07.380 START TEST nvme_mount 00:06:07.380 ************************************ 00:06:07.380 23:19:28 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:06:07.380 23:19:28 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:07.380 23:19:28 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:07.380 23:19:28 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.380 23:19:28 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:07.380 23:19:28 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:07.380 23:19:28 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:07.380 23:19:28 -- setup/common.sh@40 -- # local part_no=1 00:06:07.380 23:19:28 -- setup/common.sh@41 -- # local size=1073741824 00:06:07.380 23:19:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:07.380 23:19:28 -- setup/common.sh@44 -- # parts=() 00:06:07.380 23:19:28 -- setup/common.sh@44 -- # local parts 00:06:07.380 23:19:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:07.380 23:19:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:07.380 23:19:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:07.380 23:19:28 -- setup/common.sh@46 -- # (( part++ )) 00:06:07.380 23:19:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:07.380 23:19:28 -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:07.380 23:19:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:07.380 23:19:28 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:08.317 Creating new GPT entries in memory. 00:06:08.317 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:08.317 other utilities. 00:06:08.317 23:19:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:08.317 23:19:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:08.317 23:19:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:08.317 23:19:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:08.317 23:19:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:09.258 Creating new GPT entries in memory. 00:06:09.258 The operation has completed successfully. 
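partition_drive above turns the 1 GiB request into 512-byte sectors with (( size /= 512 )), zaps the old GPT, and creates partition 1 spanning sectors 2048..2099199 while flock holds the disk so the uevent watcher can synchronize on the new node. The same arithmetic and calls in isolation (destructive; device path taken from the trace, udev synchronization omitted):

# Sketch: rebuild the single 1 GiB test partition shown in the log.
disk=/dev/nvme0n1
size=$((1073741824 / 512))            # 1 GiB in sectors: 2097152
sgdisk "$disk" --zap-all              # "GPT data structures destroyed!"
part_start=2048
part_end=$((part_start + size - 1))   # 2099199, matching --new=1:2048:2099199
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"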
00:06:09.258 23:19:30 -- setup/common.sh@57 -- # (( part++ )) 00:06:09.258 23:19:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:09.258 23:19:30 -- setup/common.sh@62 -- # wait 118989 00:06:09.258 23:19:30 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:09.258 23:19:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:09.259 23:19:30 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:09.259 23:19:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:09.259 23:19:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:09.517 23:19:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:09.517 23:19:30 -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:09.517 23:19:30 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:09.517 23:19:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:09.517 23:19:30 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:09.517 23:19:30 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:09.517 23:19:30 -- setup/devices.sh@53 -- # local found=0 00:06:09.517 23:19:30 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:09.517 23:19:30 -- setup/devices.sh@56 -- # : 00:06:09.517 23:19:30 -- setup/devices.sh@59 -- # local pci status 00:06:09.517 23:19:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.517 23:19:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:09.517 23:19:30 -- setup/devices.sh@47 -- # setup output config 00:06:09.517 23:19:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.517 23:19:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:10.892 23:19:31 -- setup/devices.sh@63 -- # found=1 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 
23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.892 23:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:10.892 23:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.152 23:19:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:11.152 23:19:31 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:11.152 23:19:31 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.152 23:19:31 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:11.152 23:19:31 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:11.152 23:19:31 -- setup/devices.sh@110 -- # cleanup_nvme 00:06:11.152 23:19:31 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.152 23:19:31 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.152 23:19:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:11.152 23:19:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:11.152 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:11.152 23:19:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:11.152 23:19:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:11.411 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:11.411 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:11.411 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:11.411 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:11.411 23:19:32 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:11.411 23:19:32 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:11.411 23:19:32 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.411 23:19:32 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:11.411 23:19:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:11.411 23:19:32 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.411 23:19:32 -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:11.411 23:19:32 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:11.411 23:19:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:11.411 23:19:32 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.411 23:19:32 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:11.411 23:19:32 -- setup/devices.sh@53 -- # local found=0 00:06:11.411 23:19:32 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:11.411 23:19:32 -- setup/devices.sh@56 -- # : 00:06:11.411 23:19:32 -- setup/devices.sh@59 -- # local pci status 00:06:11.411 23:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.411 23:19:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:11.411 23:19:32 -- setup/devices.sh@47 -- # setup output config 00:06:11.411 23:19:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.411 23:19:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:12.785 23:19:33 -- setup/devices.sh@63 -- # found=1 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.785 23:19:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:12.785 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.044 23:19:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:13.044 23:19:33 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:13.044 23:19:33 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:13.044 23:19:33 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:13.044 23:19:33 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:13.044 23:19:33 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:13.044 23:19:33 -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:06:13.044 23:19:33 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:13.044 23:19:33 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:13.044 23:19:33 -- setup/devices.sh@50 -- # local mount_point= 00:06:13.044 23:19:33 -- setup/devices.sh@51 -- # local test_file= 00:06:13.044 23:19:33 -- setup/devices.sh@53 -- # local found=0 00:06:13.044 23:19:33 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:13.044 23:19:33 -- setup/devices.sh@59 -- # local pci status 00:06:13.044 23:19:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.044 23:19:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:13.044 23:19:33 -- setup/devices.sh@47 -- # setup output config 00:06:13.044 23:19:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.044 23:19:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:14.423 23:19:35 -- 
setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:14.423 23:19:35 -- setup/devices.sh@63 -- # found=1 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.423 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.423 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.682 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.682 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.682 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.682 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.682 23:19:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:14.682 23:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.682 23:19:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:14.682 23:19:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:14.682 23:19:35 -- setup/devices.sh@68 -- # return 0 00:06:14.682 23:19:35 -- setup/devices.sh@128 -- # cleanup_nvme 00:06:14.682 23:19:35 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.682 23:19:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:06:14.682 23:19:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.682 23:19:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:14.682 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:14.682 00:06:14.682 real 0m7.488s 00:06:14.682 user 0m1.858s 00:06:14.682 sys 0m3.279s 00:06:14.682 23:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.682 23:19:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.682 ************************************ 00:06:14.682 END TEST nvme_mount 00:06:14.682 ************************************ 00:06:14.682 23:19:35 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:14.682 23:19:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.682 23:19:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.682 23:19:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.682 ************************************ 00:06:14.682 START TEST dm_mount 00:06:14.682 ************************************ 00:06:14.682 23:19:35 -- common/autotest_common.sh@1104 -- # dm_mount 00:06:14.682 23:19:35 -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:14.682 23:19:35 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:14.682 23:19:35 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:14.682 23:19:35 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:14.682 23:19:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:14.682 23:19:35 -- setup/common.sh@40 -- # local part_no=2 00:06:14.682 23:19:35 -- setup/common.sh@41 -- # local size=1073741824 00:06:14.682 23:19:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:14.682 23:19:35 -- setup/common.sh@44 -- # parts=() 00:06:14.682 23:19:35 -- setup/common.sh@44 -- # local parts 00:06:14.682 23:19:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:14.683 23:19:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.683 23:19:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:14.683 23:19:35 -- setup/common.sh@46 -- # (( part++ )) 00:06:14.683 23:19:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.683 23:19:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:14.683 23:19:35 -- setup/common.sh@46 -- # (( part++ )) 00:06:14.941 23:19:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.941 23:19:35 -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:14.941 23:19:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:14.941 23:19:35 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:15.879 Creating new GPT entries in memory. 00:06:15.879 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:15.879 other utilities. 00:06:15.879 23:19:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:15.879 23:19:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:15.879 23:19:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:15.879 23:19:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:15.879 23:19:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:16.817 Creating new GPT entries in memory. 00:06:16.817 The operation has completed successfully. 
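The cleanup_nvme teardown that closed TEST nvme_mount above unmounts the test mount point and wipes signatures from the partition and then the whole disk, which is why wipefs reports the erased ext4 magic (53 ef) and the GPT/PMBR bytes. Reduced to the commands visible in the trace, with the long mount path abbreviated behind an assumed $SPDK_DIR:

# Sketch: tear down the nvme_mount test state.
mnt=$SPDK_DIR/test/setup/nvme_mount                       # assumed shorthand
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1    # "2 bytes ... 53 ef"
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1        # GPT + PMBR erase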
00:06:16.817 23:19:37 -- setup/common.sh@57 -- # (( part++ )) 00:06:16.817 23:19:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:16.817 23:19:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:16.817 23:19:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:16.817 23:19:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:17.754 The operation has completed successfully. 00:06:17.754 23:19:38 -- setup/common.sh@57 -- # (( part++ )) 00:06:17.754 23:19:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:17.754 23:19:38 -- setup/common.sh@62 -- # wait 121555 00:06:18.013 23:19:38 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:18.013 23:19:38 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:18.013 23:19:38 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:18.013 23:19:38 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:18.013 23:19:38 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:18.013 23:19:38 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:18.013 23:19:38 -- setup/devices.sh@161 -- # break 00:06:18.013 23:19:38 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:18.013 23:19:38 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:18.013 23:19:38 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:18.013 23:19:38 -- setup/devices.sh@166 -- # dm=dm-0 00:06:18.013 23:19:38 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:18.013 23:19:38 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:18.013 23:19:38 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:18.013 23:19:38 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:18.013 23:19:38 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:18.013 23:19:38 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:18.013 23:19:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:18.013 23:19:38 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:18.013 23:19:38 -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:18.013 23:19:38 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:18.013 23:19:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:18.013 23:19:38 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:18.013 23:19:38 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:18.013 23:19:38 -- setup/devices.sh@53 -- # local found=0 00:06:18.013 23:19:38 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:18.013 23:19:38 -- setup/devices.sh@56 -- # : 00:06:18.013 23:19:38 -- 
setup/devices.sh@59 -- # local pci status 00:06:18.013 23:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.013 23:19:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:18.013 23:19:38 -- setup/devices.sh@47 -- # setup output config 00:06:18.013 23:19:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.013 23:19:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:19.394 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.394 23:19:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:19.394 23:19:40 -- setup/devices.sh@63 -- # found=1 00:06:19.394 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.394 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.394 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.394 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.394 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.394 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.394 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.394 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.395 23:19:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:19.395 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.655 23:19:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:19.655 23:19:40 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:19.655 23:19:40 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:19.655 23:19:40 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:19.655 23:19:40 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:19.655 23:19:40 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:19.655 23:19:40 -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:19.655 23:19:40 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:19.655 23:19:40 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:19.655 23:19:40 -- setup/devices.sh@50 -- # local mount_point= 00:06:19.655 23:19:40 -- setup/devices.sh@51 -- # local test_file= 00:06:19.655 23:19:40 -- setup/devices.sh@53 -- # local found=0 00:06:19.655 23:19:40 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:19.655 23:19:40 -- setup/devices.sh@59 -- # local pci status 00:06:19.655 23:19:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.655 23:19:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:19.655 23:19:40 -- setup/devices.sh@47 -- # setup output config 00:06:19.655 23:19:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:19.655 23:19:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:21.048 23:19:41 -- setup/devices.sh@63 -- # found=1 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 
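After dmsetup create nvme_dm_test succeeds, the verify steps above resolve the mapper name to its dm-N node and check that both backing partitions list that node under holders/, which proves the device-mapper table really sits on nvme0n1p1 and nvme0n1p2. Standing alone, those checks look like:

# Sketch: confirm which dm node backs the mapper device and that both
# partitions are held by it, as the @165-@169 entries do above.
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
dm=${dm##*/}                                 # -> dm-0
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] &&
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] &&
  echo "nvme_dm_test is backed by $dm"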
00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.048 23:19:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.048 23:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.307 23:19:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:21.307 23:19:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:21.307 23:19:42 -- setup/devices.sh@68 -- # return 0 00:06:21.307 23:19:42 -- setup/devices.sh@187 -- # cleanup_dm 00:06:21.307 23:19:42 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:21.307 23:19:42 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:21.307 23:19:42 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:21.307 23:19:42 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:21.307 23:19:42 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:21.307 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:21.307 23:19:42 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:21.307 23:19:42 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:21.307 00:06:21.307 real 0m6.533s 00:06:21.307 user 0m1.248s 00:06:21.307 sys 0m2.194s 00:06:21.307 23:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.307 23:19:42 -- common/autotest_common.sh@10 -- # set +x 00:06:21.307 ************************************ 00:06:21.307 END TEST dm_mount 00:06:21.307 ************************************ 00:06:21.307 23:19:42 -- setup/devices.sh@1 -- # cleanup 00:06:21.307 23:19:42 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:21.307 23:19:42 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:21.307 23:19:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:21.307 23:19:42 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:21.307 23:19:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:21.307 23:19:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:21.565 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:21.565 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:21.566 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:21.566 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:21.566 23:19:42 -- setup/devices.sh@12 -- # cleanup_dm 00:06:21.566 23:19:42 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:21.566 23:19:42 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:21.566 23:19:42 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:21.566 23:19:42 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:21.566 23:19:42 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:21.566 23:19:42 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:21.566 00:06:21.566 real 0m16.594s 00:06:21.566 user 0m4.014s 00:06:21.566 sys 0m6.950s 00:06:21.566 23:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.566 23:19:42 -- common/autotest_common.sh@10 -- # set +x 00:06:21.566 ************************************ 00:06:21.566 END TEST devices 00:06:21.566 ************************************ 00:06:21.566 00:06:21.566 real 0m54.033s 00:06:21.566 user 0m16.116s 00:06:21.566 sys 0m26.662s 00:06:21.566 23:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.566 23:19:42 -- common/autotest_common.sh@10 -- # set +x 00:06:21.566 ************************************ 00:06:21.566 END TEST setup.sh 00:06:21.566 ************************************ 00:06:21.825 23:19:42 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:23.201 Hugepages 00:06:23.201 node hugesize free / total 00:06:23.201 node0 1048576kB 0 / 0 00:06:23.201 node0 2048kB 2048 / 2048 00:06:23.201 node1 1048576kB 0 / 0 00:06:23.201 node1 2048kB 0 / 0 00:06:23.201 00:06:23.201 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:23.201 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:23.201 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:23.202 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:23.202 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:23.202 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:23.202 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:23.202 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:23.202 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:23.202 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:23.202 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:23.202 23:19:44 -- spdk/autotest.sh@141 -- # uname -s 00:06:23.460 23:19:44 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:06:23.460 23:19:44 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:06:23.460 23:19:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:24.837 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:25.096 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:25.096 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:25.096 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:25.096 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:25.096 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:06:25.096 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:25.096 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:25.096 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:26.033 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:26.033 23:19:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:26.988 23:19:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:26.988 23:19:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:26.988 23:19:47 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:06:26.988 23:19:47 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:06:26.988 23:19:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:26.988 23:19:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:26.988 23:19:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:26.988 23:19:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:26.988 23:19:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:27.246 23:19:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:27.246 23:19:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:06:27.246 23:19:47 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:28.623 Waiting for block devices as requested 00:06:28.623 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:06:28.884 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:28.884 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:29.143 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:29.143 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:29.143 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:29.403 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:29.403 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:29.403 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:29.403 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:29.662 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:29.662 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:29.662 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:29.662 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:29.922 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:29.922 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:29.922 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:30.183 23:19:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:06:30.183 23:19:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1487 -- # grep 0000:82:00.0/nvme/nvme 00:06:30.183 23:19:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:06:30.183 23:19:50 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:06:30.183 23:19:50 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:06:30.183 23:19:50 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:06:30.183 23:19:50 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:06:30.183 23:19:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:06:30.183 23:19:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:06:30.183 23:19:50 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:06:30.183 23:19:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:06:30.183 23:19:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:06:30.183 23:19:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:06:30.183 23:19:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:06:30.183 23:19:50 -- common/autotest_common.sh@1542 -- # continue 00:06:30.183 23:19:50 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:06:30.183 23:19:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:30.183 23:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.183 23:19:50 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:06:30.183 23:19:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:30.183 23:19:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.183 23:19:50 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:32.092 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:32.092 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:32.092 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:32.092 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:32.092 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:32.092 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:32.092 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:32.092 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:32.092 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:32.661 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:32.921 23:19:53 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:06:32.921 23:19:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:32.921 23:19:53 -- common/autotest_common.sh@10 -- # set +x 00:06:32.921 23:19:53 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:06:32.921 23:19:53 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:32.921 23:19:53 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:32.921 23:19:53 -- common/autotest_common.sh@1562 -- # bdfs=() 00:06:32.921 23:19:53 -- common/autotest_common.sh@1562 -- # local bdfs 00:06:32.921 23:19:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:32.921 23:19:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:32.921 
23:19:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:32.921 23:19:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:32.921 23:19:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:32.921 23:19:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:32.921 23:19:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:32.921 23:19:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:06:32.921 23:19:53 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:06:32.921 23:19:53 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:06:32.921 23:19:53 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:06:32.921 23:19:53 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:32.921 23:19:53 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:06:32.921 23:19:53 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:82:00.0 00:06:32.921 23:19:53 -- common/autotest_common.sh@1577 -- # [[ -z 0000:82:00.0 ]] 00:06:32.921 23:19:53 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=127105 00:06:32.921 23:19:53 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.921 23:19:53 -- common/autotest_common.sh@1583 -- # waitforlisten 127105 00:06:32.921 23:19:53 -- common/autotest_common.sh@819 -- # '[' -z 127105 ']' 00:06:32.921 23:19:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.921 23:19:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.921 23:19:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.921 23:19:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.921 23:19:53 -- common/autotest_common.sh@10 -- # set +x 00:06:33.182 [2024-07-11 23:19:53.941059] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
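The waitforlisten step traced above is the pattern every RPC-driven block in this log relies on: start spdk_tgt, then poll its UNIX-domain RPC socket until the target answers. A minimal bash sketch of that loop, assuming rpc.py's standard -s socket flag and the rpc_get_methods method; only max_retries=100 comes from the trace, the 0.1 s retry interval here is an illustration:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log above
rpc_sock=/var/tmp/spdk.sock

"$rootdir/build/bin/spdk_tgt" &
spdk_tgt_pid=$!

for ((i = 0; i < 100; i++)); do                             # max_retries=100, as traced
    if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
        break                                               # target is up and listening
    fi
    sleep 0.1                                               # assumed interval
done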
00:06:33.182 [2024-07-11 23:19:53.941263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127105 ]
00:06:33.182 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.182 [2024-07-11 23:19:54.054769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.442 [2024-07-11 23:19:54.150893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:33.442 [2024-07-11 23:19:54.151091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.701 23:19:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:33.701 23:19:54 -- common/autotest_common.sh@852 -- # return 0
00:06:33.701 23:19:54 -- common/autotest_common.sh@1585 -- # bdf_id=0
00:06:33.701 23:19:54 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}"
00:06:33.701 23:19:54 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0
00:06:36.997 nvme0n1
00:06:36.997 23:19:57 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:06:37.257 [2024-07-11 23:19:58.187852] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:06:37.257 [2024-07-11 23:19:58.187898] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:06:37.257 request:
00:06:37.257 {
00:06:37.257 "nvme_ctrlr_name": "nvme0",
00:06:37.257 "password": "test",
00:06:37.257 "method": "bdev_nvme_opal_revert",
00:06:37.257 "req_id": 1
00:06:37.257 }
00:06:37.257 Got JSON-RPC error response
00:06:37.257 response:
00:06:37.257 {
00:06:37.257 "code": -32603,
00:06:37.257 "message": "Internal error"
00:06:37.257 }
00:06:37.257 23:19:58 -- common/autotest_common.sh@1589 -- # true
00:06:37.257 23:19:58 -- common/autotest_common.sh@1590 -- # (( ++bdf_id ))
00:06:37.516 23:19:58 -- common/autotest_common.sh@1593 -- # killprocess 127105
00:06:37.516 23:19:58 -- common/autotest_common.sh@926 -- # '[' -z 127105 ']'
00:06:37.516 23:19:58 -- common/autotest_common.sh@930 -- # kill -0 127105
00:06:37.516 23:19:58 -- common/autotest_common.sh@931 -- # uname
00:06:37.516 23:19:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:37.516 23:19:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127105
00:06:37.516 23:19:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:37.516 23:19:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:37.516 23:19:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127105'
00:06:37.516 killing process with pid 127105
00:06:37.516 23:19:58 -- common/autotest_common.sh@945 -- # kill 127105
00:06:37.516 23:19:58 -- common/autotest_common.sh@950 -- # wait 127105
00:06:39.425 23:20:00 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']'
00:06:39.425 23:20:00 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']'
00:06:39.425 23:20:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]]
00:06:39.425 23:20:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]]
00:06:39.425 23:20:00 -- spdk/autotest.sh@173 -- # timing_enter lib
00:06:39.425 23:20:00 -- common/autotest_common.sh@712 -- # xtrace_disable
00:06:39.426 23:20:00 -- common/autotest_common.sh@10 -- # set +x
00:06:39.426 23:20:00
-- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:39.426 23:20:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.426 23:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.426 23:20:00 -- common/autotest_common.sh@10 -- # set +x 00:06:39.426 ************************************ 00:06:39.426 START TEST env 00:06:39.426 ************************************ 00:06:39.426 23:20:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:39.426 * Looking for test storage... 00:06:39.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:39.426 23:20:00 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:39.426 23:20:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.426 23:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.426 23:20:00 -- common/autotest_common.sh@10 -- # set +x 00:06:39.426 ************************************ 00:06:39.426 START TEST env_memory 00:06:39.426 ************************************ 00:06:39.426 23:20:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:39.426 00:06:39.426 00:06:39.426 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.426 http://cunit.sourceforge.net/ 00:06:39.426 00:06:39.426 00:06:39.426 Suite: memory 00:06:39.426 Test: alloc and free memory map ...[2024-07-11 23:20:00.172983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:39.426 passed 00:06:39.426 Test: mem map translation ...[2024-07-11 23:20:00.204148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:39.426 [2024-07-11 23:20:00.204189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:39.426 [2024-07-11 23:20:00.204257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:39.426 [2024-07-11 23:20:00.204276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:39.426 passed 00:06:39.426 Test: mem map registration ...[2024-07-11 23:20:00.265758] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:39.426 [2024-07-11 23:20:00.265786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:39.426 passed 00:06:39.426 Test: mem map adjacent registrations ...passed 00:06:39.426 00:06:39.426 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.426 suites 1 1 n/a 0 0 00:06:39.426 tests 4 4 4 0 0 00:06:39.426 asserts 152 152 152 0 n/a 00:06:39.426 00:06:39.426 Elapsed time = 0.206 seconds 00:06:39.426 00:06:39.426 real 0m0.216s 00:06:39.426 user 0m0.208s 00:06:39.426 sys 0m0.007s 00:06:39.426 
23:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.426 23:20:00 -- common/autotest_common.sh@10 -- # set +x 00:06:39.426 ************************************ 00:06:39.426 END TEST env_memory 00:06:39.426 ************************************ 00:06:39.686 23:20:00 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:39.686 23:20:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.686 23:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.686 23:20:00 -- common/autotest_common.sh@10 -- # set +x 00:06:39.686 ************************************ 00:06:39.686 START TEST env_vtophys 00:06:39.686 ************************************ 00:06:39.686 23:20:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:39.686 EAL: lib.eal log level changed from notice to debug 00:06:39.686 EAL: Detected lcore 0 as core 0 on socket 0 00:06:39.686 EAL: Detected lcore 1 as core 1 on socket 0 00:06:39.686 EAL: Detected lcore 2 as core 2 on socket 0 00:06:39.686 EAL: Detected lcore 3 as core 3 on socket 0 00:06:39.686 EAL: Detected lcore 4 as core 4 on socket 0 00:06:39.686 EAL: Detected lcore 5 as core 5 on socket 0 00:06:39.686 EAL: Detected lcore 6 as core 8 on socket 0 00:06:39.686 EAL: Detected lcore 7 as core 9 on socket 0 00:06:39.686 EAL: Detected lcore 8 as core 10 on socket 0 00:06:39.686 EAL: Detected lcore 9 as core 11 on socket 0 00:06:39.686 EAL: Detected lcore 10 as core 12 on socket 0 00:06:39.686 EAL: Detected lcore 11 as core 13 on socket 0 00:06:39.686 EAL: Detected lcore 12 as core 0 on socket 1 00:06:39.686 EAL: Detected lcore 13 as core 1 on socket 1 00:06:39.686 EAL: Detected lcore 14 as core 2 on socket 1 00:06:39.686 EAL: Detected lcore 15 as core 3 on socket 1 00:06:39.686 EAL: Detected lcore 16 as core 4 on socket 1 00:06:39.686 EAL: Detected lcore 17 as core 5 on socket 1 00:06:39.686 EAL: Detected lcore 18 as core 8 on socket 1 00:06:39.686 EAL: Detected lcore 19 as core 9 on socket 1 00:06:39.686 EAL: Detected lcore 20 as core 10 on socket 1 00:06:39.686 EAL: Detected lcore 21 as core 11 on socket 1 00:06:39.687 EAL: Detected lcore 22 as core 12 on socket 1 00:06:39.687 EAL: Detected lcore 23 as core 13 on socket 1 00:06:39.687 EAL: Detected lcore 24 as core 0 on socket 0 00:06:39.687 EAL: Detected lcore 25 as core 1 on socket 0 00:06:39.687 EAL: Detected lcore 26 as core 2 on socket 0 00:06:39.687 EAL: Detected lcore 27 as core 3 on socket 0 00:06:39.687 EAL: Detected lcore 28 as core 4 on socket 0 00:06:39.687 EAL: Detected lcore 29 as core 5 on socket 0 00:06:39.687 EAL: Detected lcore 30 as core 8 on socket 0 00:06:39.687 EAL: Detected lcore 31 as core 9 on socket 0 00:06:39.687 EAL: Detected lcore 32 as core 10 on socket 0 00:06:39.687 EAL: Detected lcore 33 as core 11 on socket 0 00:06:39.687 EAL: Detected lcore 34 as core 12 on socket 0 00:06:39.687 EAL: Detected lcore 35 as core 13 on socket 0 00:06:39.687 EAL: Detected lcore 36 as core 0 on socket 1 00:06:39.687 EAL: Detected lcore 37 as core 1 on socket 1 00:06:39.687 EAL: Detected lcore 38 as core 2 on socket 1 00:06:39.687 EAL: Detected lcore 39 as core 3 on socket 1 00:06:39.687 EAL: Detected lcore 40 as core 4 on socket 1 00:06:39.687 EAL: Detected lcore 41 as core 5 on socket 1 00:06:39.687 EAL: Detected lcore 42 as core 8 on socket 1 00:06:39.687 EAL: Detected lcore 43 as core 9 on socket 1 00:06:39.687 EAL: Detected lcore 44 as 
core 10 on socket 1 00:06:39.687 EAL: Detected lcore 45 as core 11 on socket 1 00:06:39.687 EAL: Detected lcore 46 as core 12 on socket 1 00:06:39.687 EAL: Detected lcore 47 as core 13 on socket 1 00:06:39.687 EAL: Maximum logical cores by configuration: 128 00:06:39.687 EAL: Detected CPU lcores: 48 00:06:39.687 EAL: Detected NUMA nodes: 2 00:06:39.687 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:39.687 EAL: Detected shared linkage of DPDK 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:39.687 EAL: Registered [vdev] bus. 00:06:39.687 EAL: bus.vdev log level changed from disabled to notice 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:39.687 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:39.687 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:39.687 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:39.687 EAL: No shared files mode enabled, IPC will be disabled 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Bus pci wants IOVA as 'DC' 00:06:39.687 EAL: Bus vdev wants IOVA as 'DC' 00:06:39.687 EAL: Buses did not request a specific IOVA mode. 00:06:39.687 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:39.687 EAL: Selected IOVA mode 'VA' 00:06:39.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.687 EAL: Probing VFIO support... 00:06:39.687 EAL: IOMMU type 1 (Type 1) is supported 00:06:39.687 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:39.687 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:39.687 EAL: VFIO support initialized 00:06:39.687 EAL: Ask a virtual area of 0x2e000 bytes 00:06:39.687 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:39.687 EAL: Setting up physically contiguous memory... 
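The lcore map EAL prints above mirrors the kernel's CPU topology files. A hedged sysfs loop that reproduces the same "lcore N as core X on socket Y" view on any Linux box (standard paths, shown for illustration only; EAL's own detection also consults the NUMA node links rather than just physical_package_id):

for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    n=${cpu##*cpu}                                   # lcore number
    core=$(<"$cpu/topology/core_id")                 # core id within the package
    sock=$(<"$cpu/topology/physical_package_id")     # socket (package) id
    echo "Detected lcore $n as core $core on socket $sock"
done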
00:06:39.687 EAL: Setting maximum number of open files to 524288 00:06:39.687 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:39.687 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:39.687 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:39.687 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:39.687 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.687 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:39.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.687 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.687 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:39.687 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:39.687 EAL: Hugepages will be freed exactly as allocated. 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: TSC frequency is ~2700000 KHz 00:06:39.687 EAL: Main lcore 0 is ready (tid=7fe92ea4da00;cpuset=[0]) 00:06:39.687 EAL: Trying to obtain current memory policy. 00:06:39.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.687 EAL: Restoring previous memory policy: 0 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was expanded by 2MB 00:06:39.687 EAL: PCI device 0000:0e:00.0 on NUMA socket 0 00:06:39.687 EAL: probe driver: 8086:1583 net_i40e 00:06:39.687 EAL: Not managed by a supported kernel driver, skipped 00:06:39.687 EAL: PCI device 0000:0e:00.1 on NUMA socket 0 00:06:39.687 EAL: probe driver: 8086:1583 net_i40e 00:06:39.687 EAL: Not managed by a supported kernel driver, skipped 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:39.687 EAL: Mem event callback 'spdk:(nil)' registered 00:06:39.687 00:06:39.687 00:06:39.687 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.687 http://cunit.sourceforge.net/ 00:06:39.687 00:06:39.687 00:06:39.687 Suite: components_suite 00:06:39.687 Test: vtophys_malloc_test ...passed 00:06:39.687 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:39.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.687 EAL: Restoring previous memory policy: 4 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was expanded by 4MB 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was shrunk by 4MB 00:06:39.687 EAL: Trying to obtain current memory policy. 00:06:39.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.687 EAL: Restoring previous memory policy: 4 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was expanded by 6MB 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was shrunk by 6MB 00:06:39.687 EAL: Trying to obtain current memory policy. 00:06:39.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.687 EAL: Restoring previous memory policy: 4 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was expanded by 10MB 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was shrunk by 10MB 00:06:39.687 EAL: Trying to obtain current memory policy. 
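Each "expanded by"/"shrunk by" pair above is the registered 'spdk:' mem event callback growing and then trimming the hugepage-backed heap around one malloc round. A hedged way to watch the same accounting from outside the process, using only standard kernel interfaces:

grep -E 'HugePages_(Total|Free)' /proc/meminfo   # system-wide 2048 kB pool, as in the setup.sh status block
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages   # per-node free count, matching the node0/node1 rows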
00:06:39.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.687 EAL: Restoring previous memory policy: 4 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was expanded by 18MB 00:06:39.687 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.687 EAL: request: mp_malloc_sync 00:06:39.687 EAL: No shared files mode enabled, IPC is disabled 00:06:39.687 EAL: Heap on socket 0 was shrunk by 18MB 00:06:39.687 EAL: Trying to obtain current memory policy. 00:06:39.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.687 EAL: Restoring previous memory policy: 4 00:06:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.688 EAL: request: mp_malloc_sync 00:06:39.688 EAL: No shared files mode enabled, IPC is disabled 00:06:39.688 EAL: Heap on socket 0 was expanded by 34MB 00:06:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.688 EAL: request: mp_malloc_sync 00:06:39.688 EAL: No shared files mode enabled, IPC is disabled 00:06:39.688 EAL: Heap on socket 0 was shrunk by 34MB 00:06:39.688 EAL: Trying to obtain current memory policy. 00:06:39.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.688 EAL: Restoring previous memory policy: 4 00:06:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.688 EAL: request: mp_malloc_sync 00:06:39.688 EAL: No shared files mode enabled, IPC is disabled 00:06:39.688 EAL: Heap on socket 0 was expanded by 66MB 00:06:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.688 EAL: request: mp_malloc_sync 00:06:39.688 EAL: No shared files mode enabled, IPC is disabled 00:06:39.688 EAL: Heap on socket 0 was shrunk by 66MB 00:06:39.688 EAL: Trying to obtain current memory policy. 00:06:39.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.688 EAL: Restoring previous memory policy: 4 00:06:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.688 EAL: request: mp_malloc_sync 00:06:39.688 EAL: No shared files mode enabled, IPC is disabled 00:06:39.688 EAL: Heap on socket 0 was expanded by 130MB 00:06:39.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.688 EAL: request: mp_malloc_sync 00:06:39.688 EAL: No shared files mode enabled, IPC is disabled 00:06:39.688 EAL: Heap on socket 0 was shrunk by 130MB 00:06:39.688 EAL: Trying to obtain current memory policy. 00:06:39.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.947 EAL: Restoring previous memory policy: 4 00:06:39.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.947 EAL: request: mp_malloc_sync 00:06:39.947 EAL: No shared files mode enabled, IPC is disabled 00:06:39.947 EAL: Heap on socket 0 was expanded by 258MB 00:06:39.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.947 EAL: request: mp_malloc_sync 00:06:39.947 EAL: No shared files mode enabled, IPC is disabled 00:06:39.947 EAL: Heap on socket 0 was shrunk by 258MB 00:06:39.947 EAL: Trying to obtain current memory policy. 
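The "Setting policy MPOL_PREFERRED for socket 0" / "Restoring previous memory policy" bracket around every round is the test flipping the NUMA allocation policy via set_mempolicy(2). The rough shell-level equivalent for a whole process, assuming numactl is installed (some_app is a placeholder, not part of this test):

numactl --preferred=0 some_app                   # prefer node 0, fall back elsewhere under pressure
numactl --membind=0 --cpunodebind=0 some_app     # or hard-bind memory and CPUs to node 0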
00:06:39.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.206 EAL: Restoring previous memory policy: 4 00:06:40.206 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.206 EAL: request: mp_malloc_sync 00:06:40.206 EAL: No shared files mode enabled, IPC is disabled 00:06:40.206 EAL: Heap on socket 0 was expanded by 514MB 00:06:40.206 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.466 EAL: request: mp_malloc_sync 00:06:40.466 EAL: No shared files mode enabled, IPC is disabled 00:06:40.466 EAL: Heap on socket 0 was shrunk by 514MB 00:06:40.466 EAL: Trying to obtain current memory policy. 00:06:40.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:40.745 EAL: Restoring previous memory policy: 4 00:06:40.745 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.745 EAL: request: mp_malloc_sync 00:06:40.745 EAL: No shared files mode enabled, IPC is disabled 00:06:40.745 EAL: Heap on socket 0 was expanded by 1026MB 00:06:41.013 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.272 EAL: request: mp_malloc_sync 00:06:41.272 EAL: No shared files mode enabled, IPC is disabled 00:06:41.272 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:41.272 passed 00:06:41.272 00:06:41.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.272 suites 1 1 n/a 0 0 00:06:41.272 tests 2 2 2 0 0 00:06:41.272 asserts 497 497 497 0 n/a 00:06:41.272 00:06:41.272 Elapsed time = 1.455 seconds 00:06:41.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.272 EAL: request: mp_malloc_sync 00:06:41.272 EAL: No shared files mode enabled, IPC is disabled 00:06:41.272 EAL: Heap on socket 0 was shrunk by 2MB 00:06:41.272 EAL: No shared files mode enabled, IPC is disabled 00:06:41.272 EAL: No shared files mode enabled, IPC is disabled 00:06:41.272 EAL: No shared files mode enabled, IPC is disabled 00:06:41.272 00:06:41.272 real 0m1.593s 00:06:41.272 user 0m0.906s 00:06:41.272 sys 0m0.646s 00:06:41.272 23:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.272 23:20:01 -- common/autotest_common.sh@10 -- # set +x 00:06:41.272 ************************************ 00:06:41.272 END TEST env_vtophys 00:06:41.272 ************************************ 00:06:41.272 23:20:02 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:41.272 23:20:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.272 23:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.272 23:20:02 -- common/autotest_common.sh@10 -- # set +x 00:06:41.272 ************************************ 00:06:41.272 START TEST env_pci 00:06:41.272 ************************************ 00:06:41.272 23:20:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:41.272 00:06:41.272 00:06:41.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.272 http://cunit.sourceforge.net/ 00:06:41.272 00:06:41.272 00:06:41.272 Suite: pci 00:06:41.272 Test: pci_hook ...[2024-07-11 23:20:02.025870] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 128139 has claimed it 00:06:41.272 EAL: Cannot find device (10000:00:01.0) 00:06:41.272 EAL: Failed to attach device on primary process 00:06:41.272 passed 00:06:41.272 00:06:41.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.272 suites 1 1 n/a 0 0 00:06:41.272 tests 1 1 1 0 0 
00:06:41.272 asserts 25 25 25 0 n/a 00:06:41.272 00:06:41.272 Elapsed time = 0.045 seconds 00:06:41.272 00:06:41.272 real 0m0.069s 00:06:41.272 user 0m0.019s 00:06:41.272 sys 0m0.049s 00:06:41.272 23:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.272 23:20:02 -- common/autotest_common.sh@10 -- # set +x 00:06:41.272 ************************************ 00:06:41.272 END TEST env_pci 00:06:41.272 ************************************ 00:06:41.272 23:20:02 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:41.272 23:20:02 -- env/env.sh@15 -- # uname 00:06:41.272 23:20:02 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:41.272 23:20:02 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:41.272 23:20:02 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:41.272 23:20:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:41.272 23:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.272 23:20:02 -- common/autotest_common.sh@10 -- # set +x 00:06:41.272 ************************************ 00:06:41.272 START TEST env_dpdk_post_init 00:06:41.272 ************************************ 00:06:41.272 23:20:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:41.272 EAL: Detected CPU lcores: 48 00:06:41.272 EAL: Detected NUMA nodes: 2 00:06:41.272 EAL: Detected shared linkage of DPDK 00:06:41.272 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:41.272 EAL: Selected IOVA mode 'VA' 00:06:41.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.272 EAL: VFIO support initialized 00:06:41.272 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:41.532 EAL: Using IOMMU type 1 (Type 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:41.532 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:42.469 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:06:45.760 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:06:45.760 EAL: 
Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:06:45.760 Starting DPDK initialization... 00:06:45.760 Starting SPDK post initialization... 00:06:45.760 SPDK NVMe probe 00:06:45.760 Attaching to 0000:82:00.0 00:06:45.760 Attached to 0000:82:00.0 00:06:45.760 Cleaning up... 00:06:45.760 00:06:45.760 real 0m4.419s 00:06:45.760 user 0m3.286s 00:06:45.760 sys 0m0.191s 00:06:45.760 23:20:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.760 23:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.760 ************************************ 00:06:45.760 END TEST env_dpdk_post_init 00:06:45.760 ************************************ 00:06:45.760 23:20:06 -- env/env.sh@26 -- # uname 00:06:45.760 23:20:06 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:45.760 23:20:06 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.760 23:20:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.760 23:20:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.760 23:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.760 ************************************ 00:06:45.760 START TEST env_mem_callbacks 00:06:45.760 ************************************ 00:06:45.760 23:20:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.760 EAL: Detected CPU lcores: 48 00:06:45.760 EAL: Detected NUMA nodes: 2 00:06:45.760 EAL: Detected shared linkage of DPDK 00:06:45.760 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.760 EAL: Selected IOVA mode 'VA' 00:06:45.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.760 EAL: VFIO support initialized 00:06:45.760 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.760 00:06:45.760 00:06:45.760 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.760 http://cunit.sourceforge.net/ 00:06:45.760 00:06:45.760 00:06:45.760 Suite: memory 00:06:45.760 Test: test ... 
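The START TEST/END TEST banners and the real/user/sys timings that frame every block in this log come from the run_test helper in autotest_common.sh. A minimal re-sketch of its shape, using only bash built-ins (the real helper also manages xtrace and validates its arguments):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                        # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut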
00:06:45.760 register 0x200000200000 2097152 00:06:45.760 malloc 3145728 00:06:45.760 register 0x200000400000 4194304 00:06:45.760 buf 0x200000500000 len 3145728 PASSED 00:06:45.760 malloc 64 00:06:45.760 buf 0x2000004fff40 len 64 PASSED 00:06:45.760 malloc 4194304 00:06:45.760 register 0x200000800000 6291456 00:06:45.760 buf 0x200000a00000 len 4194304 PASSED 00:06:45.760 free 0x200000500000 3145728 00:06:45.760 free 0x2000004fff40 64 00:06:45.760 unregister 0x200000400000 4194304 PASSED 00:06:45.760 free 0x200000a00000 4194304 00:06:45.760 unregister 0x200000800000 6291456 PASSED 00:06:45.760 malloc 8388608 00:06:45.760 register 0x200000400000 10485760 00:06:45.760 buf 0x200000600000 len 8388608 PASSED 00:06:45.760 free 0x200000600000 8388608 00:06:45.760 unregister 0x200000400000 10485760 PASSED 00:06:45.760 passed 00:06:45.760 00:06:45.760 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.760 suites 1 1 n/a 0 0 00:06:45.760 tests 1 1 1 0 0 00:06:45.760 asserts 15 15 15 0 n/a 00:06:45.760 00:06:45.760 Elapsed time = 0.005 seconds 00:06:45.760 00:06:45.760 real 0m0.058s 00:06:45.760 user 0m0.016s 00:06:45.760 sys 0m0.041s 00:06:45.760 23:20:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.760 23:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.760 ************************************ 00:06:45.760 END TEST env_mem_callbacks 00:06:45.760 ************************************ 00:06:45.760 00:06:45.760 real 0m6.588s 00:06:45.760 user 0m4.541s 00:06:45.760 sys 0m1.090s 00:06:45.760 23:20:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.760 23:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.760 ************************************ 00:06:45.761 END TEST env 00:06:45.761 ************************************ 00:06:45.761 23:20:06 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:45.761 23:20:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.761 23:20:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.761 23:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.761 ************************************ 00:06:45.761 START TEST rpc 00:06:45.761 ************************************ 00:06:45.761 23:20:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:46.019 * Looking for test storage... 00:06:46.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:46.019 23:20:06 -- rpc/rpc.sh@65 -- # spdk_pid=128799 00:06:46.019 23:20:06 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:46.019 23:20:06 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.019 23:20:06 -- rpc/rpc.sh@67 -- # waitforlisten 128799 00:06:46.019 23:20:06 -- common/autotest_common.sh@819 -- # '[' -z 128799 ']' 00:06:46.019 23:20:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.019 23:20:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.019 23:20:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
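The rpc_integrity block that follows drives the freshly started target purely over JSON-RPC; every step is a bdev RPC plus a jq check on the returned JSON. Its core calls can be replayed by hand with rpc.py (method names and arguments are exactly the ones traced below; only the shell variable is added here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_malloc_create 8 512            # 8 MB malloc bdev, 512 B blocks -> prints "Malloc0"
$rpc bdev_passthru_create -b Malloc0 -p Passthru0
$rpc bdev_get_bdevs | jq length          # expect 2: Malloc0 plus Passthru0
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete Malloc0
$rpc bdev_get_bdevs | jq length          # back to 0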
00:06:46.019 23:20:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.019 23:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:46.019 [2024-07-11 23:20:06.824516] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:46.019 [2024-07-11 23:20:06.824695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128799 ] 00:06:46.019 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.019 [2024-07-11 23:20:06.920433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.277 [2024-07-11 23:20:07.012327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.277 [2024-07-11 23:20:07.012504] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:46.277 [2024-07-11 23:20:07.012523] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 128799' to capture a snapshot of events at runtime. 00:06:46.277 [2024-07-11 23:20:07.012537] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid128799 for offline analysis/debug. 00:06:46.277 [2024-07-11 23:20:07.012576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.217 23:20:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.217 23:20:07 -- common/autotest_common.sh@852 -- # return 0 00:06:47.217 23:20:07 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:47.217 23:20:07 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:47.217 23:20:07 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:47.217 23:20:07 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:47.217 23:20:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.217 23:20:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.217 23:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 ************************************ 00:06:47.217 START TEST rpc_integrity 00:06:47.217 ************************************ 00:06:47.217 23:20:07 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:47.217 23:20:07 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.217 23:20:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.217 23:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 23:20:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.217 23:20:07 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.217 23:20:07 -- rpc/rpc.sh@13 -- # jq length 00:06:47.217 23:20:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.217 23:20:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.217 23:20:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.217 23:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 23:20:07 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:06:47.217 23:20:07 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:47.217 23:20:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.217 23:20:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.217 23:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 23:20:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.217 23:20:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.217 { 00:06:47.217 "name": "Malloc0", 00:06:47.217 "aliases": [ 00:06:47.217 "a2004b5c-be73-46a6-9b61-03d6dd395a00" 00:06:47.217 ], 00:06:47.217 "product_name": "Malloc disk", 00:06:47.217 "block_size": 512, 00:06:47.217 "num_blocks": 16384, 00:06:47.217 "uuid": "a2004b5c-be73-46a6-9b61-03d6dd395a00", 00:06:47.217 "assigned_rate_limits": { 00:06:47.217 "rw_ios_per_sec": 0, 00:06:47.217 "rw_mbytes_per_sec": 0, 00:06:47.217 "r_mbytes_per_sec": 0, 00:06:47.217 "w_mbytes_per_sec": 0 00:06:47.217 }, 00:06:47.217 "claimed": false, 00:06:47.217 "zoned": false, 00:06:47.217 "supported_io_types": { 00:06:47.217 "read": true, 00:06:47.217 "write": true, 00:06:47.217 "unmap": true, 00:06:47.217 "write_zeroes": true, 00:06:47.217 "flush": true, 00:06:47.217 "reset": true, 00:06:47.217 "compare": false, 00:06:47.217 "compare_and_write": false, 00:06:47.217 "abort": true, 00:06:47.217 "nvme_admin": false, 00:06:47.217 "nvme_io": false 00:06:47.217 }, 00:06:47.217 "memory_domains": [ 00:06:47.217 { 00:06:47.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.217 "dma_device_type": 2 00:06:47.217 } 00:06:47.217 ], 00:06:47.217 "driver_specific": {} 00:06:47.217 } 00:06:47.217 ]' 00:06:47.217 23:20:07 -- rpc/rpc.sh@17 -- # jq length 00:06:47.217 23:20:07 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.217 23:20:07 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:47.217 23:20:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.217 23:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 [2024-07-11 23:20:07.976241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:47.217 [2024-07-11 23:20:07.976289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.217 [2024-07-11 23:20:07.976314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2438ab0 00:06:47.217 [2024-07-11 23:20:07.976329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.217 [2024-07-11 23:20:07.977755] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.217 [2024-07-11 23:20:07.977783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.217 Passthru0 00:06:47.217 23:20:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.217 23:20:07 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.217 23:20:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.217 23:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 23:20:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.217 23:20:07 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.217 { 00:06:47.217 "name": "Malloc0", 00:06:47.217 "aliases": [ 00:06:47.217 "a2004b5c-be73-46a6-9b61-03d6dd395a00" 00:06:47.217 ], 00:06:47.217 "product_name": "Malloc disk", 00:06:47.217 "block_size": 512, 00:06:47.217 "num_blocks": 16384, 00:06:47.217 "uuid": "a2004b5c-be73-46a6-9b61-03d6dd395a00", 00:06:47.217 "assigned_rate_limits": { 00:06:47.217 "rw_ios_per_sec": 0, 00:06:47.217 "rw_mbytes_per_sec": 0, 00:06:47.217 
"r_mbytes_per_sec": 0, 00:06:47.217 "w_mbytes_per_sec": 0 00:06:47.217 }, 00:06:47.217 "claimed": true, 00:06:47.217 "claim_type": "exclusive_write", 00:06:47.217 "zoned": false, 00:06:47.217 "supported_io_types": { 00:06:47.217 "read": true, 00:06:47.217 "write": true, 00:06:47.217 "unmap": true, 00:06:47.217 "write_zeroes": true, 00:06:47.217 "flush": true, 00:06:47.217 "reset": true, 00:06:47.217 "compare": false, 00:06:47.217 "compare_and_write": false, 00:06:47.217 "abort": true, 00:06:47.217 "nvme_admin": false, 00:06:47.217 "nvme_io": false 00:06:47.217 }, 00:06:47.217 "memory_domains": [ 00:06:47.217 { 00:06:47.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.217 "dma_device_type": 2 00:06:47.217 } 00:06:47.217 ], 00:06:47.217 "driver_specific": {} 00:06:47.217 }, 00:06:47.217 { 00:06:47.217 "name": "Passthru0", 00:06:47.217 "aliases": [ 00:06:47.217 "93e42871-98e8-5460-9d8d-0b17bb7f60eb" 00:06:47.217 ], 00:06:47.217 "product_name": "passthru", 00:06:47.217 "block_size": 512, 00:06:47.217 "num_blocks": 16384, 00:06:47.217 "uuid": "93e42871-98e8-5460-9d8d-0b17bb7f60eb", 00:06:47.217 "assigned_rate_limits": { 00:06:47.217 "rw_ios_per_sec": 0, 00:06:47.217 "rw_mbytes_per_sec": 0, 00:06:47.217 "r_mbytes_per_sec": 0, 00:06:47.218 "w_mbytes_per_sec": 0 00:06:47.218 }, 00:06:47.218 "claimed": false, 00:06:47.218 "zoned": false, 00:06:47.218 "supported_io_types": { 00:06:47.218 "read": true, 00:06:47.218 "write": true, 00:06:47.218 "unmap": true, 00:06:47.218 "write_zeroes": true, 00:06:47.218 "flush": true, 00:06:47.218 "reset": true, 00:06:47.218 "compare": false, 00:06:47.218 "compare_and_write": false, 00:06:47.218 "abort": true, 00:06:47.218 "nvme_admin": false, 00:06:47.218 "nvme_io": false 00:06:47.218 }, 00:06:47.218 "memory_domains": [ 00:06:47.218 { 00:06:47.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.218 "dma_device_type": 2 00:06:47.218 } 00:06:47.218 ], 00:06:47.218 "driver_specific": { 00:06:47.218 "passthru": { 00:06:47.218 "name": "Passthru0", 00:06:47.218 "base_bdev_name": "Malloc0" 00:06:47.218 } 00:06:47.218 } 00:06:47.218 } 00:06:47.218 ]' 00:06:47.218 23:20:07 -- rpc/rpc.sh@21 -- # jq length 00:06:47.218 23:20:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.218 23:20:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.218 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.218 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.218 23:20:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:47.218 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.218 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.218 23:20:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.218 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.218 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.218 23:20:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.218 23:20:08 -- rpc/rpc.sh@26 -- # jq length 00:06:47.218 23:20:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.218 00:06:47.218 real 0m0.229s 00:06:47.218 user 0m0.153s 00:06:47.218 sys 0m0.023s 00:06:47.218 23:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.218 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 ************************************ 
00:06:47.218 END TEST rpc_integrity 00:06:47.218 ************************************ 00:06:47.218 23:20:08 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:47.218 23:20:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.218 23:20:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.218 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 ************************************ 00:06:47.218 START TEST rpc_plugins 00:06:47.218 ************************************ 00:06:47.218 23:20:08 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:06:47.218 23:20:08 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:47.218 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.218 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.218 23:20:08 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:47.218 23:20:08 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:47.218 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.218 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.218 23:20:08 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:47.218 { 00:06:47.218 "name": "Malloc1", 00:06:47.218 "aliases": [ 00:06:47.218 "99f34818-a7e7-4163-9e37-78de6f374589" 00:06:47.218 ], 00:06:47.218 "product_name": "Malloc disk", 00:06:47.218 "block_size": 4096, 00:06:47.218 "num_blocks": 256, 00:06:47.218 "uuid": "99f34818-a7e7-4163-9e37-78de6f374589", 00:06:47.218 "assigned_rate_limits": { 00:06:47.218 "rw_ios_per_sec": 0, 00:06:47.218 "rw_mbytes_per_sec": 0, 00:06:47.218 "r_mbytes_per_sec": 0, 00:06:47.218 "w_mbytes_per_sec": 0 00:06:47.218 }, 00:06:47.218 "claimed": false, 00:06:47.218 "zoned": false, 00:06:47.218 "supported_io_types": { 00:06:47.218 "read": true, 00:06:47.218 "write": true, 00:06:47.218 "unmap": true, 00:06:47.218 "write_zeroes": true, 00:06:47.218 "flush": true, 00:06:47.218 "reset": true, 00:06:47.218 "compare": false, 00:06:47.218 "compare_and_write": false, 00:06:47.218 "abort": true, 00:06:47.218 "nvme_admin": false, 00:06:47.218 "nvme_io": false 00:06:47.218 }, 00:06:47.218 "memory_domains": [ 00:06:47.218 { 00:06:47.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.218 "dma_device_type": 2 00:06:47.218 } 00:06:47.218 ], 00:06:47.218 "driver_specific": {} 00:06:47.218 } 00:06:47.218 ]' 00:06:47.218 23:20:08 -- rpc/rpc.sh@32 -- # jq length 00:06:47.478 23:20:08 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:47.478 23:20:08 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:47.478 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.478 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.478 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.478 23:20:08 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:47.478 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.478 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.478 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.478 23:20:08 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:47.478 23:20:08 -- rpc/rpc.sh@36 -- # jq length 00:06:47.478 23:20:08 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:47.478 00:06:47.478 real 0m0.114s 00:06:47.478 user 0m0.082s 00:06:47.478 sys 0m0.005s 00:06:47.478 23:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.478 23:20:08 -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.478 ************************************ 00:06:47.478 END TEST rpc_plugins 00:06:47.478 ************************************ 00:06:47.478 23:20:08 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:47.478 23:20:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.478 23:20:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.478 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.478 ************************************ 00:06:47.478 START TEST rpc_trace_cmd_test 00:06:47.478 ************************************ 00:06:47.478 23:20:08 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:06:47.478 23:20:08 -- rpc/rpc.sh@40 -- # local info 00:06:47.478 23:20:08 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:47.478 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.478 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.478 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.478 23:20:08 -- rpc/rpc.sh@42 -- # info='{ 00:06:47.478 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid128799", 00:06:47.478 "tpoint_group_mask": "0x8", 00:06:47.478 "iscsi_conn": { 00:06:47.478 "mask": "0x2", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "scsi": { 00:06:47.478 "mask": "0x4", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "bdev": { 00:06:47.478 "mask": "0x8", 00:06:47.478 "tpoint_mask": "0xffffffffffffffff" 00:06:47.478 }, 00:06:47.478 "nvmf_rdma": { 00:06:47.478 "mask": "0x10", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "nvmf_tcp": { 00:06:47.478 "mask": "0x20", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "ftl": { 00:06:47.478 "mask": "0x40", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "blobfs": { 00:06:47.478 "mask": "0x80", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "dsa": { 00:06:47.478 "mask": "0x200", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "thread": { 00:06:47.478 "mask": "0x400", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "nvme_pcie": { 00:06:47.478 "mask": "0x800", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "iaa": { 00:06:47.478 "mask": "0x1000", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "nvme_tcp": { 00:06:47.478 "mask": "0x2000", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 }, 00:06:47.478 "bdev_nvme": { 00:06:47.478 "mask": "0x4000", 00:06:47.478 "tpoint_mask": "0x0" 00:06:47.478 } 00:06:47.478 }' 00:06:47.478 23:20:08 -- rpc/rpc.sh@43 -- # jq length 00:06:47.478 23:20:08 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:47.478 23:20:08 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:47.478 23:20:08 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:47.478 23:20:08 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:47.739 23:20:08 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:47.739 23:20:08 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:47.739 23:20:08 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:47.739 23:20:08 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:47.739 23:20:08 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:47.739 00:06:47.739 real 0m0.240s 00:06:47.739 user 0m0.217s 00:06:47.739 sys 0m0.016s 00:06:47.739 23:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.739 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.739 ************************************ 
00:06:47.739 END TEST rpc_trace_cmd_test 00:06:47.739 ************************************ 00:06:47.739 23:20:08 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:47.739 23:20:08 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:47.739 23:20:08 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:47.739 23:20:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.739 23:20:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.739 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.739 ************************************ 00:06:47.739 START TEST rpc_daemon_integrity 00:06:47.739 ************************************ 00:06:47.739 23:20:08 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:47.739 23:20:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.739 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.739 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.739 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.739 23:20:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.739 23:20:08 -- rpc/rpc.sh@13 -- # jq length 00:06:47.739 23:20:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.739 23:20:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.739 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.739 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.739 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.739 23:20:08 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:47.739 23:20:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.739 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.739 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.739 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.739 23:20:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.739 { 00:06:47.739 "name": "Malloc2", 00:06:47.739 "aliases": [ 00:06:47.739 "903269bc-93ae-47e4-a7d3-67ac1caabd6f" 00:06:47.739 ], 00:06:47.739 "product_name": "Malloc disk", 00:06:47.739 "block_size": 512, 00:06:47.739 "num_blocks": 16384, 00:06:47.739 "uuid": "903269bc-93ae-47e4-a7d3-67ac1caabd6f", 00:06:47.739 "assigned_rate_limits": { 00:06:47.739 "rw_ios_per_sec": 0, 00:06:47.739 "rw_mbytes_per_sec": 0, 00:06:47.739 "r_mbytes_per_sec": 0, 00:06:47.739 "w_mbytes_per_sec": 0 00:06:47.739 }, 00:06:47.739 "claimed": false, 00:06:47.739 "zoned": false, 00:06:47.739 "supported_io_types": { 00:06:47.739 "read": true, 00:06:47.739 "write": true, 00:06:47.739 "unmap": true, 00:06:47.739 "write_zeroes": true, 00:06:47.739 "flush": true, 00:06:47.739 "reset": true, 00:06:47.739 "compare": false, 00:06:47.739 "compare_and_write": false, 00:06:47.739 "abort": true, 00:06:47.739 "nvme_admin": false, 00:06:47.739 "nvme_io": false 00:06:47.739 }, 00:06:47.739 "memory_domains": [ 00:06:47.739 { 00:06:47.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.739 "dma_device_type": 2 00:06:47.739 } 00:06:47.739 ], 00:06:47.739 "driver_specific": {} 00:06:47.739 } 00:06:47.739 ]' 00:06:47.739 23:20:08 -- rpc/rpc.sh@17 -- # jq length 00:06:47.739 23:20:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.739 23:20:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:47.739 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.739 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.739 [2024-07-11 23:20:08.642417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:47.739 [2024-07-11 
23:20:08.642464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.739 [2024-07-11 23:20:08.642490] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x243c680 00:06:47.739 [2024-07-11 23:20:08.642505] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.739 [2024-07-11 23:20:08.643826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.739 [2024-07-11 23:20:08.643854] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.739 Passthru0 00:06:47.739 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.739 23:20:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.739 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.739 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.739 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.739 23:20:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.739 { 00:06:47.739 "name": "Malloc2", 00:06:47.739 "aliases": [ 00:06:47.739 "903269bc-93ae-47e4-a7d3-67ac1caabd6f" 00:06:47.739 ], 00:06:47.739 "product_name": "Malloc disk", 00:06:47.739 "block_size": 512, 00:06:47.739 "num_blocks": 16384, 00:06:47.739 "uuid": "903269bc-93ae-47e4-a7d3-67ac1caabd6f", 00:06:47.739 "assigned_rate_limits": { 00:06:47.739 "rw_ios_per_sec": 0, 00:06:47.739 "rw_mbytes_per_sec": 0, 00:06:47.739 "r_mbytes_per_sec": 0, 00:06:47.739 "w_mbytes_per_sec": 0 00:06:47.739 }, 00:06:47.739 "claimed": true, 00:06:47.739 "claim_type": "exclusive_write", 00:06:47.739 "zoned": false, 00:06:47.739 "supported_io_types": { 00:06:47.739 "read": true, 00:06:47.739 "write": true, 00:06:47.739 "unmap": true, 00:06:47.739 "write_zeroes": true, 00:06:47.739 "flush": true, 00:06:47.739 "reset": true, 00:06:47.739 "compare": false, 00:06:47.739 "compare_and_write": false, 00:06:47.739 "abort": true, 00:06:47.739 "nvme_admin": false, 00:06:47.739 "nvme_io": false 00:06:47.739 }, 00:06:47.739 "memory_domains": [ 00:06:47.739 { 00:06:47.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.739 "dma_device_type": 2 00:06:47.739 } 00:06:47.739 ], 00:06:47.739 "driver_specific": {} 00:06:47.739 }, 00:06:47.739 { 00:06:47.739 "name": "Passthru0", 00:06:47.739 "aliases": [ 00:06:47.739 "bdb0039e-adc6-55e5-83b8-47b9be5f64e6" 00:06:47.739 ], 00:06:47.739 "product_name": "passthru", 00:06:47.739 "block_size": 512, 00:06:47.739 "num_blocks": 16384, 00:06:47.739 "uuid": "bdb0039e-adc6-55e5-83b8-47b9be5f64e6", 00:06:47.739 "assigned_rate_limits": { 00:06:47.739 "rw_ios_per_sec": 0, 00:06:47.739 "rw_mbytes_per_sec": 0, 00:06:47.739 "r_mbytes_per_sec": 0, 00:06:47.739 "w_mbytes_per_sec": 0 00:06:47.739 }, 00:06:47.739 "claimed": false, 00:06:47.739 "zoned": false, 00:06:47.739 "supported_io_types": { 00:06:47.739 "read": true, 00:06:47.739 "write": true, 00:06:47.739 "unmap": true, 00:06:47.739 "write_zeroes": true, 00:06:47.739 "flush": true, 00:06:47.739 "reset": true, 00:06:47.739 "compare": false, 00:06:47.739 "compare_and_write": false, 00:06:47.739 "abort": true, 00:06:47.739 "nvme_admin": false, 00:06:47.739 "nvme_io": false 00:06:47.739 }, 00:06:47.739 "memory_domains": [ 00:06:47.739 { 00:06:47.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.739 "dma_device_type": 2 00:06:47.739 } 00:06:47.739 ], 00:06:47.739 "driver_specific": { 00:06:47.739 "passthru": { 00:06:47.739 "name": "Passthru0", 00:06:47.739 "base_bdev_name": "Malloc2" 00:06:47.739 } 00:06:47.739 } 00:06:47.739 } 
00:06:47.739 ]' 00:06:47.739 23:20:08 -- rpc/rpc.sh@21 -- # jq length 00:06:47.999 23:20:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.999 23:20:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.999 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.999 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.999 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.999 23:20:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:47.999 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.999 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.999 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.999 23:20:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.999 23:20:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.999 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.999 23:20:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.999 23:20:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.999 23:20:08 -- rpc/rpc.sh@26 -- # jq length 00:06:47.999 23:20:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.999 00:06:47.999 real 0m0.227s 00:06:47.999 user 0m0.154s 00:06:47.999 sys 0m0.020s 00:06:47.999 23:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.999 23:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.999 ************************************ 00:06:47.999 END TEST rpc_daemon_integrity 00:06:47.999 ************************************ 00:06:47.999 23:20:08 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:47.999 23:20:08 -- rpc/rpc.sh@84 -- # killprocess 128799 00:06:47.999 23:20:08 -- common/autotest_common.sh@926 -- # '[' -z 128799 ']' 00:06:47.999 23:20:08 -- common/autotest_common.sh@930 -- # kill -0 128799 00:06:47.999 23:20:08 -- common/autotest_common.sh@931 -- # uname 00:06:47.999 23:20:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:47.999 23:20:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128799 00:06:47.999 23:20:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:47.999 23:20:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:47.999 23:20:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128799' 00:06:47.999 killing process with pid 128799 00:06:47.999 23:20:08 -- common/autotest_common.sh@945 -- # kill 128799 00:06:47.999 23:20:08 -- common/autotest_common.sh@950 -- # wait 128799 00:06:48.568 00:06:48.568 real 0m2.583s 00:06:48.568 user 0m3.333s 00:06:48.568 sys 0m0.684s 00:06:48.568 23:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.568 23:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.568 ************************************ 00:06:48.568 END TEST rpc 00:06:48.568 ************************************ 00:06:48.568 23:20:09 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:48.568 23:20:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.568 23:20:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.568 23:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.568 ************************************ 00:06:48.568 START TEST rpc_client 00:06:48.568 ************************************ 00:06:48.568 23:20:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:48.568 * 
Looking for test storage... 00:06:48.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:48.568 23:20:09 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:48.568 OK 00:06:48.568 23:20:09 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:48.568 00:06:48.568 real 0m0.101s 00:06:48.568 user 0m0.041s 00:06:48.568 sys 0m0.066s 00:06:48.568 23:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.568 23:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.568 ************************************ 00:06:48.568 END TEST rpc_client 00:06:48.568 ************************************ 00:06:48.568 23:20:09 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:48.568 23:20:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.568 23:20:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.568 23:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.568 ************************************ 00:06:48.568 START TEST json_config 00:06:48.568 ************************************ 00:06:48.568 23:20:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:48.568 23:20:09 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.568 23:20:09 -- nvmf/common.sh@7 -- # uname -s 00:06:48.568 23:20:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.568 23:20:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.568 23:20:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.568 23:20:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.568 23:20:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.568 23:20:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.568 23:20:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.568 23:20:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.568 23:20:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.568 23:20:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.568 23:20:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:48.568 23:20:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:48.568 23:20:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.568 23:20:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.568 23:20:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.568 23:20:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.568 23:20:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.568 23:20:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.568 23:20:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.569 23:20:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.569 23:20:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.569 23:20:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.569 23:20:09 -- paths/export.sh@5 -- # export PATH 00:06:48.569 23:20:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.569 23:20:09 -- nvmf/common.sh@46 -- # : 0 00:06:48.569 23:20:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:48.569 23:20:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:48.569 23:20:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:48.569 23:20:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.569 23:20:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.569 23:20:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:48.569 23:20:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:48.569 23:20:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:48.569 23:20:09 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:48.569 23:20:09 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:48.569 23:20:09 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:48.569 23:20:09 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:48.569 23:20:09 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:48.569 23:20:09 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:48.569 23:20:09 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:48.569 23:20:09 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:48.569 23:20:09 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:48.569 23:20:09 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:48.569 23:20:09 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:48.569 23:20:09 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:48.569 23:20:09 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:48.569 23:20:09 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:48.569 23:20:09 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:48.569 INFO: JSON configuration test init 00:06:48.569 23:20:09 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:48.569 23:20:09 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:48.569 23:20:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:48.569 23:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.569 23:20:09 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:48.569 23:20:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:48.569 23:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.569 23:20:09 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:48.569 23:20:09 -- json_config/json_config.sh@98 -- # local app=target 00:06:48.569 23:20:09 -- json_config/json_config.sh@99 -- # shift 00:06:48.569 23:20:09 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:48.569 23:20:09 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:48.569 23:20:09 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:48.569 23:20:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:48.569 23:20:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:48.569 23:20:09 -- json_config/json_config.sh@111 -- # app_pid[$app]=129303 00:06:48.569 23:20:09 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:48.569 23:20:09 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:48.569 Waiting for target to run... 00:06:48.569 23:20:09 -- json_config/json_config.sh@114 -- # waitforlisten 129303 /var/tmp/spdk_tgt.sock 00:06:48.569 23:20:09 -- common/autotest_common.sh@819 -- # '[' -z 129303 ']' 00:06:48.569 23:20:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:48.569 23:20:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.569 23:20:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:48.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:48.569 23:20:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.569 23:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.829 [2024-07-11 23:20:09.575198] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
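The target above is launched with --wait-for-rpc, which halts spdk_tgt before subsystem initialization until it is told to proceed over its RPC socket; waitforlisten then polls that socket until it answers. A stand-alone sketch of the same launch-and-wait pattern (the retry budget mirrors the max_retries=100 above; framework_start_init is the usual RPC for resuming initialization, whereas this harness instead proceeds via load_config further down):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # checkout path used in this run
  sock=/var/tmp/spdk_tgt.sock
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
  tgt_pid=$!
  for ((i = 0; i < 100; i++)); do                           # poll until the socket answers
      "$SPDK/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  "$SPDK/scripts/rpc.py" -s "$sock" framework_start_init    # resume subsystem init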
00:06:48.829 [2024-07-11 23:20:09.575296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129303 ] 00:06:48.829 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.396 [2024-07-11 23:20:10.120944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.396 [2024-07-11 23:20:10.183248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.396 [2024-07-11 23:20:10.183440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.961 23:20:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:49.961 23:20:10 -- common/autotest_common.sh@852 -- # return 0 00:06:49.961 23:20:10 -- json_config/json_config.sh@115 -- # echo '' 00:06:49.961 00:06:49.962 23:20:10 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:49.962 23:20:10 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:49.962 23:20:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:49.962 23:20:10 -- common/autotest_common.sh@10 -- # set +x 00:06:49.962 23:20:10 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:49.962 23:20:10 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:49.962 23:20:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:49.962 23:20:10 -- common/autotest_common.sh@10 -- # set +x 00:06:49.962 23:20:10 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:49.962 23:20:10 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:49.962 23:20:10 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:53.243 23:20:14 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:53.243 23:20:14 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:53.243 23:20:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.243 23:20:14 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 23:20:14 -- json_config/json_config.sh@48 -- # local ret=0 00:06:53.243 23:20:14 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:53.243 23:20:14 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:53.243 23:20:14 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:53.243 23:20:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:53.243 23:20:14 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:53.501 23:20:14 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:53.501 23:20:14 -- json_config/json_config.sh@51 -- # local get_types 00:06:53.501 23:20:14 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:53.501 23:20:14 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:53.501 23:20:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:53.501 23:20:14 -- common/autotest_common.sh@10 -- # set +x 00:06:53.501 23:20:14 -- json_config/json_config.sh@58 -- # return 0 00:06:53.501 23:20:14 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:53.501 23:20:14 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:53.501 23:20:14 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:53.501 23:20:14 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:53.501 23:20:14 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:53.501 23:20:14 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:53.501 23:20:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.501 23:20:14 -- common/autotest_common.sh@10 -- # set +x 00:06:53.501 23:20:14 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:53.501 23:20:14 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:53.501 23:20:14 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:53.501 23:20:14 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:53.501 23:20:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:54.067 MallocForNvmf0 00:06:54.067 23:20:14 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:54.067 23:20:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:54.067 MallocForNvmf1 00:06:54.067 23:20:15 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:54.067 23:20:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:54.634 [2024-07-11 23:20:15.489607] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.634 23:20:15 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.634 23:20:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.891 23:20:15 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:54.891 23:20:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:55.457 23:20:16 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:55.457 23:20:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:55.716 23:20:16 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:55.716 23:20:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:55.974 [2024-07-11 23:20:16.765765] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
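The create_nvmf_subsystem_config step above is a fixed RPC ordering: back-end bdevs first, then the TCP transport, then the subsystem, its namespaces, and last the listener (the listener requires the transport to exist, and each namespace requires its bdev). The same sequence as direct rpc.py calls:

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB bdev, 512-byte blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1024-byte blocks
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0          # io-unit 8192, no in-capsule data
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420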
00:06:55.974 23:20:16 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:55.974 23:20:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:55.974 23:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.974 23:20:16 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:55.974 23:20:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:55.974 23:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.974 23:20:16 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:55.974 23:20:16 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:55.974 23:20:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:56.231 MallocBdevForConfigChangeCheck 00:06:56.231 23:20:17 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:56.231 23:20:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:56.231 23:20:17 -- common/autotest_common.sh@10 -- # set +x 00:06:56.231 23:20:17 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:56.231 23:20:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:56.796 23:20:17 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:56.796 INFO: shutting down applications... 00:06:56.796 23:20:17 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:56.796 23:20:17 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:56.796 23:20:17 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:56.796 23:20:17 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:58.692 Calling clear_iscsi_subsystem 00:06:58.692 Calling clear_nvmf_subsystem 00:06:58.692 Calling clear_nbd_subsystem 00:06:58.692 Calling clear_ublk_subsystem 00:06:58.692 Calling clear_vhost_blk_subsystem 00:06:58.692 Calling clear_vhost_scsi_subsystem 00:06:58.692 Calling clear_scheduler_subsystem 00:06:58.692 Calling clear_bdev_subsystem 00:06:58.692 Calling clear_accel_subsystem 00:06:58.692 Calling clear_vmd_subsystem 00:06:58.692 Calling clear_sock_subsystem 00:06:58.692 Calling clear_iobuf_subsystem 00:06:58.692 23:20:19 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:58.692 23:20:19 -- json_config/json_config.sh@396 -- # count=100 00:06:58.692 23:20:19 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:58.692 23:20:19 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.692 23:20:19 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:58.692 23:20:19 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:58.692 23:20:19 -- json_config/json_config.sh@398 -- # break 00:06:58.692 23:20:19 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:58.692 23:20:19 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:06:58.692 23:20:19 -- json_config/json_config.sh@120 -- # local app=target 00:06:58.692 23:20:19 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:58.692 23:20:19 -- json_config/json_config.sh@124 -- # [[ -n 129303 ]] 00:06:58.692 23:20:19 -- json_config/json_config.sh@127 -- # kill -SIGINT 129303 00:06:58.692 23:20:19 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:58.692 23:20:19 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:58.692 23:20:19 -- json_config/json_config.sh@130 -- # kill -0 129303 00:06:58.692 23:20:19 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:59.265 23:20:20 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:59.265 23:20:20 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:59.265 23:20:20 -- json_config/json_config.sh@130 -- # kill -0 129303 00:06:59.265 23:20:20 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:59.265 23:20:20 -- json_config/json_config.sh@132 -- # break 00:06:59.265 23:20:20 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:59.265 23:20:20 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:59.265 SPDK target shutdown done 00:06:59.265 23:20:20 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:59.265 INFO: relaunching applications... 00:06:59.265 23:20:20 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:59.265 23:20:20 -- json_config/json_config.sh@98 -- # local app=target 00:06:59.265 23:20:20 -- json_config/json_config.sh@99 -- # shift 00:06:59.265 23:20:20 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:59.265 23:20:20 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:59.265 23:20:20 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:59.265 23:20:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:59.265 23:20:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:59.265 23:20:20 -- json_config/json_config.sh@111 -- # app_pid[$app]=130766 00:06:59.265 23:20:20 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:59.265 23:20:20 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:59.265 Waiting for target to run... 00:06:59.265 23:20:20 -- json_config/json_config.sh@114 -- # waitforlisten 130766 /var/tmp/spdk_tgt.sock 00:06:59.265 23:20:20 -- common/autotest_common.sh@819 -- # '[' -z 130766 ']' 00:06:59.265 23:20:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:59.265 23:20:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:59.265 23:20:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:59.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:59.265 23:20:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:59.265 23:20:20 -- common/autotest_common.sh@10 -- # set +x 00:06:59.265 [2024-07-11 23:20:20.168940] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
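Shutdown here is cooperative: SIGINT asks the app to exit, and the harness polls with kill -0 for up to thirty half-second intervals before declaring the target gone, then relaunches it from the config saved earlier. Condensed (reusing $SPDK and $tgt_pid from the launch sketch above):

  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do          # same 30 x 0.5 s budget as the loop above
      kill -0 "$tgt_pid" 2>/dev/null || break
      sleep 0.5
  done
  echo 'SPDK target shutdown done'
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/spdk_tgt_config.json" &
  tgt_pid=$!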
00:06:59.265 [2024-07-11 23:20:20.169049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130766 ] 00:06:59.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.203 [2024-07-11 23:20:20.789930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.203 [2024-07-11 23:20:20.871383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.203 [2024-07-11 23:20:20.871589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.527 [2024-07-11 23:20:23.897271] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.527 [2024-07-11 23:20:23.929758] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:03.527 23:20:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:03.527 23:20:23 -- common/autotest_common.sh@852 -- # return 0 00:07:03.527 23:20:23 -- json_config/json_config.sh@115 -- # echo '' 00:07:03.527 00:07:03.528 23:20:23 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:03.528 23:20:23 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:03.528 INFO: Checking if target configuration is the same... 00:07:03.528 23:20:24 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:03.528 23:20:24 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:03.528 23:20:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:03.528 + '[' 2 -ne 2 ']' 00:07:03.528 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:03.528 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:03.528 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:03.528 +++ basename /dev/fd/62 00:07:03.528 ++ mktemp /tmp/62.XXX 00:07:03.528 + tmp_file_1=/tmp/62.dfn 00:07:03.528 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:03.528 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:03.528 + tmp_file_2=/tmp/spdk_tgt_config.json.tUY 00:07:03.528 + ret=0 00:07:03.528 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:03.528 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:03.785 + diff -u /tmp/62.dfn /tmp/spdk_tgt_config.json.tUY 00:07:03.785 + echo 'INFO: JSON config files are the same' 00:07:03.785 INFO: JSON config files are the same 00:07:03.785 + rm /tmp/62.dfn /tmp/spdk_tgt_config.json.tUY 00:07:03.785 + exit 0 00:07:03.785 23:20:24 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:03.785 23:20:24 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:03.785 INFO: changing configuration and checking if this can be detected... 
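The json_diff.sh helper above reduces to: dump both configurations, canonicalize each with config_filter.py -method sort so key and array ordering cannot cause false mismatches, then run a plain diff. Roughly (assuming, as the pipeline above suggests, that the filter reads stdin):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  filter="$SPDK/test/json_config/config_filter.py"
  $rpc save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < "$SPDK/spdk_tgt_config.json" > /tmp/ref.json
  diff -u /tmp/ref.json /tmp/live.json && echo 'INFO: JSON config files are the same'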
00:07:03.785 23:20:24 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:03.785 23:20:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:04.042 23:20:24 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:04.042 23:20:24 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:04.042 23:20:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:04.042 + '[' 2 -ne 2 ']' 00:07:04.042 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:04.042 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:04.042 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:04.042 +++ basename /dev/fd/62 00:07:04.042 ++ mktemp /tmp/62.XXX 00:07:04.042 + tmp_file_1=/tmp/62.JJ1 00:07:04.042 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:04.042 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:04.042 + tmp_file_2=/tmp/spdk_tgt_config.json.mPD 00:07:04.042 + ret=0 00:07:04.042 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:04.608 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:04.608 + diff -u /tmp/62.JJ1 /tmp/spdk_tgt_config.json.mPD 00:07:04.608 + ret=1 00:07:04.608 + echo '=== Start of file: /tmp/62.JJ1 ===' 00:07:04.608 + cat /tmp/62.JJ1 00:07:04.608 + echo '=== End of file: /tmp/62.JJ1 ===' 00:07:04.608 + echo '' 00:07:04.608 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mPD ===' 00:07:04.608 + cat /tmp/spdk_tgt_config.json.mPD 00:07:04.608 + echo '=== End of file: /tmp/spdk_tgt_config.json.mPD ===' 00:07:04.608 + echo '' 00:07:04.608 + rm /tmp/62.JJ1 /tmp/spdk_tgt_config.json.mPD 00:07:04.608 + exit 1 00:07:04.608 23:20:25 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:07:04.608 INFO: configuration change detected. 
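MallocBdevForConfigChangeCheck exists purely as a canary: deleting it is the cheapest possible state mutation, and the rerun of the diff (ret=1 above) proves the comparison really detects drift rather than always passing. The same check condensed, reusing the variables from the previous sketch:

  $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
  $rpc save_config | $filter -method sort > /tmp/changed.json
  if ! diff -u /tmp/ref.json /tmp/changed.json >/dev/null; then
      echo 'INFO: configuration change detected.'
  fi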
00:07:04.608 23:20:25 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:04.608 23:20:25 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:04.608 23:20:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:04.608 23:20:25 -- common/autotest_common.sh@10 -- # set +x 00:07:04.608 23:20:25 -- json_config/json_config.sh@360 -- # local ret=0 00:07:04.608 23:20:25 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:04.608 23:20:25 -- json_config/json_config.sh@370 -- # [[ -n 130766 ]] 00:07:04.608 23:20:25 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:04.608 23:20:25 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:04.608 23:20:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:04.608 23:20:25 -- common/autotest_common.sh@10 -- # set +x 00:07:04.608 23:20:25 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:07:04.608 23:20:25 -- json_config/json_config.sh@246 -- # uname -s 00:07:04.608 23:20:25 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:04.608 23:20:25 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:04.608 23:20:25 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:04.608 23:20:25 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:04.609 23:20:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:04.609 23:20:25 -- common/autotest_common.sh@10 -- # set +x 00:07:04.609 23:20:25 -- json_config/json_config.sh@376 -- # killprocess 130766 00:07:04.609 23:20:25 -- common/autotest_common.sh@926 -- # '[' -z 130766 ']' 00:07:04.609 23:20:25 -- common/autotest_common.sh@930 -- # kill -0 130766 00:07:04.609 23:20:25 -- common/autotest_common.sh@931 -- # uname 00:07:04.609 23:20:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:04.609 23:20:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130766 00:07:04.609 23:20:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:04.609 23:20:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:04.609 23:20:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130766' 00:07:04.609 killing process with pid 130766 00:07:04.609 23:20:25 -- common/autotest_common.sh@945 -- # kill 130766 00:07:04.609 23:20:25 -- common/autotest_common.sh@950 -- # wait 130766 00:07:06.510 23:20:27 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:06.510 23:20:27 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:06.510 23:20:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:06.510 23:20:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.510 23:20:27 -- json_config/json_config.sh@381 -- # return 0 00:07:06.510 23:20:27 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:06.510 INFO: Success 00:07:06.510 00:07:06.510 real 0m17.649s 00:07:06.510 user 0m21.394s 00:07:06.510 sys 0m2.656s 00:07:06.510 23:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.510 23:20:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.510 ************************************ 00:07:06.510 END TEST json_config 00:07:06.510 ************************************ 00:07:06.510 23:20:27 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:06.510 23:20:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:06.510 23:20:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.510 23:20:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.511 ************************************ 00:07:06.511 START TEST json_config_extra_key 00:07:06.511 ************************************ 00:07:06.511 23:20:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.511 23:20:27 -- nvmf/common.sh@7 -- # uname -s 00:07:06.511 23:20:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.511 23:20:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.511 23:20:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.511 23:20:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.511 23:20:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.511 23:20:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.511 23:20:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.511 23:20:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.511 23:20:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.511 23:20:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.511 23:20:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:06.511 23:20:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:06.511 23:20:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.511 23:20:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.511 23:20:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:06.511 23:20:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.511 23:20:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.511 23:20:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.511 23:20:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.511 23:20:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.511 23:20:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.511 23:20:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.511 23:20:27 -- paths/export.sh@5 -- # export PATH 00:07:06.511 23:20:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.511 23:20:27 -- nvmf/common.sh@46 -- # : 0 00:07:06.511 23:20:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:06.511 23:20:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:06.511 23:20:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:06.511 23:20:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.511 23:20:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.511 23:20:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:06.511 23:20:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:06.511 23:20:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:07:06.511 INFO: launching applications... 
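As in the json_config run earlier, nvmf/common.sh derives the host identity from nvme-cli. A sketch of that derivation; the parameter expansion for NVME_HOSTID is an assumption inferred from the values printed above (the host ID equals the UUID tail of the generated NQN), not a quote of common.sh:

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: trailing UUID reused as host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")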
00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=131704 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:06.511 Waiting for target to run... 00:07:06.511 23:20:27 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 131704 /var/tmp/spdk_tgt.sock 00:07:06.511 23:20:27 -- common/autotest_common.sh@819 -- # '[' -z 131704 ']' 00:07:06.511 23:20:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:06.511 23:20:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:06.511 23:20:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:06.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:06.511 23:20:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:06.511 23:20:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.511 [2024-07-11 23:20:27.278688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.511 [2024-07-11 23:20:27.278876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131704 ] 00:07:06.511 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.080 [2024-07-11 23:20:27.834564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.080 [2024-07-11 23:20:27.897274] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.080 [2024-07-11 23:20:27.897460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.339 23:20:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:07.339 23:20:28 -- common/autotest_common.sh@852 -- # return 0 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:07.339 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:07:07.339 INFO: shutting down applications... 
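Unlike the earlier runs that started with --wait-for-rpc and were configured live, this target boots directly from a JSON file. A minimal hand-written config of the same shape (the path and bdev parameters are illustrative, not the contents of extra_key.json):

  cfg=/tmp/extra_key.json
  echo '{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_malloc_create",
      "params": { "name": "Malloc0", "num_blocks": 256, "block_size": 512 } } ] } ] }' > "$cfg"
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$cfg"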
00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 131704 ]] 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 131704 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 131704 00:07:07.339 23:20:28 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 131704 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:07.905 SPDK target shutdown done 00:07:07.905 23:20:28 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:07.905 Success 00:07:07.905 00:07:07.905 real 0m1.638s 00:07:07.905 user 0m1.478s 00:07:07.905 sys 0m0.650s 00:07:07.905 23:20:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.905 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:07:07.905 ************************************ 00:07:07.905 END TEST json_config_extra_key 00:07:07.905 ************************************ 00:07:07.905 23:20:28 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:07.905 23:20:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.905 23:20:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.905 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:07:07.905 ************************************ 00:07:07.905 START TEST alias_rpc 00:07:07.905 ************************************ 00:07:07.905 23:20:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:07.905 * Looking for test storage... 00:07:08.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:08.163 23:20:28 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:08.163 23:20:28 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=131891 00:07:08.163 23:20:28 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:08.163 23:20:28 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 131891 00:07:08.163 23:20:28 -- common/autotest_common.sh@819 -- # '[' -z 131891 ']' 00:07:08.163 23:20:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.163 23:20:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:08.163 23:20:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.163 23:20:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:08.163 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:07:08.163 [2024-07-11 23:20:28.913602] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:08.163 [2024-07-11 23:20:28.913697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131891 ] 00:07:08.163 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.163 [2024-07-11 23:20:28.986375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.163 [2024-07-11 23:20:29.078283] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:08.163 [2024-07-11 23:20:29.078481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.536 23:20:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:09.536 23:20:30 -- common/autotest_common.sh@852 -- # return 0 00:07:09.536 23:20:30 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:10.101 23:20:30 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 131891 00:07:10.101 23:20:30 -- common/autotest_common.sh@926 -- # '[' -z 131891 ']' 00:07:10.101 23:20:30 -- common/autotest_common.sh@930 -- # kill -0 131891 00:07:10.101 23:20:30 -- common/autotest_common.sh@931 -- # uname 00:07:10.101 23:20:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:10.101 23:20:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131891 00:07:10.101 23:20:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:10.101 23:20:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:10.101 23:20:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131891' 00:07:10.101 killing process with pid 131891 00:07:10.101 23:20:30 -- common/autotest_common.sh@945 -- # kill 131891 00:07:10.101 23:20:30 -- common/autotest_common.sh@950 -- # wait 131891 00:07:10.360 00:07:10.360 real 0m2.442s 00:07:10.360 user 0m3.233s 00:07:10.360 sys 0m0.578s 00:07:10.360 23:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.360 23:20:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.360 ************************************ 00:07:10.360 END TEST alias_rpc 00:07:10.360 ************************************ 00:07:10.360 23:20:31 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:07:10.360 23:20:31 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:10.360 23:20:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.360 23:20:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.360 23:20:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.360 ************************************ 00:07:10.360 START TEST spdkcli_tcp 00:07:10.360 ************************************ 00:07:10.360 23:20:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:10.618 * Looking for test storage... 
00:07:10.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:10.618 23:20:31 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:10.618 23:20:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:10.618 23:20:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:10.618 23:20:31 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:10.618 23:20:31 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:10.618 23:20:31 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:10.619 23:20:31 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:10.619 23:20:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:10.619 23:20:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.619 23:20:31 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=132341 00:07:10.619 23:20:31 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:10.619 23:20:31 -- spdkcli/tcp.sh@27 -- # waitforlisten 132341 00:07:10.619 23:20:31 -- common/autotest_common.sh@819 -- # '[' -z 132341 ']' 00:07:10.619 23:20:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.619 23:20:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.619 23:20:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.619 23:20:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.619 23:20:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.619 [2024-07-11 23:20:31.434380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:10.619 [2024-07-11 23:20:31.434572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132341 ] 00:07:10.619 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.619 [2024-07-11 23:20:31.530760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.877 [2024-07-11 23:20:31.624183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.877 [2024-07-11 23:20:31.624389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.877 [2024-07-11 23:20:31.624396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.810 23:20:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:11.810 23:20:32 -- common/autotest_common.sh@852 -- # return 0 00:07:11.810 23:20:32 -- spdkcli/tcp.sh@31 -- # socat_pid=132484 00:07:11.810 23:20:32 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:11.810 23:20:32 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:12.376 [ 00:07:12.376 "bdev_malloc_delete", 00:07:12.376 "bdev_malloc_create", 00:07:12.376 "bdev_null_resize", 00:07:12.376 "bdev_null_delete", 00:07:12.376 "bdev_null_create", 00:07:12.376 "bdev_nvme_cuse_unregister", 00:07:12.376 "bdev_nvme_cuse_register", 00:07:12.376 "bdev_opal_new_user", 00:07:12.376 "bdev_opal_set_lock_state", 00:07:12.376 "bdev_opal_delete", 00:07:12.376 "bdev_opal_get_info", 00:07:12.376 "bdev_opal_create", 00:07:12.376 "bdev_nvme_opal_revert", 00:07:12.376 "bdev_nvme_opal_init", 00:07:12.376 "bdev_nvme_send_cmd", 00:07:12.376 "bdev_nvme_get_path_iostat", 00:07:12.376 "bdev_nvme_get_mdns_discovery_info", 00:07:12.376 "bdev_nvme_stop_mdns_discovery", 00:07:12.376 "bdev_nvme_start_mdns_discovery", 00:07:12.376 "bdev_nvme_set_multipath_policy", 00:07:12.376 "bdev_nvme_set_preferred_path", 00:07:12.376 "bdev_nvme_get_io_paths", 00:07:12.376 "bdev_nvme_remove_error_injection", 00:07:12.376 "bdev_nvme_add_error_injection", 00:07:12.376 "bdev_nvme_get_discovery_info", 00:07:12.376 "bdev_nvme_stop_discovery", 00:07:12.376 "bdev_nvme_start_discovery", 00:07:12.376 "bdev_nvme_get_controller_health_info", 00:07:12.376 "bdev_nvme_disable_controller", 00:07:12.376 "bdev_nvme_enable_controller", 00:07:12.376 "bdev_nvme_reset_controller", 00:07:12.376 "bdev_nvme_get_transport_statistics", 00:07:12.376 "bdev_nvme_apply_firmware", 00:07:12.376 "bdev_nvme_detach_controller", 00:07:12.376 "bdev_nvme_get_controllers", 00:07:12.376 "bdev_nvme_attach_controller", 00:07:12.376 "bdev_nvme_set_hotplug", 00:07:12.376 "bdev_nvme_set_options", 00:07:12.376 "bdev_passthru_delete", 00:07:12.376 "bdev_passthru_create", 00:07:12.376 "bdev_lvol_grow_lvstore", 00:07:12.376 "bdev_lvol_get_lvols", 00:07:12.376 "bdev_lvol_get_lvstores", 00:07:12.376 "bdev_lvol_delete", 00:07:12.376 "bdev_lvol_set_read_only", 00:07:12.376 "bdev_lvol_resize", 00:07:12.376 "bdev_lvol_decouple_parent", 00:07:12.376 "bdev_lvol_inflate", 00:07:12.376 "bdev_lvol_rename", 00:07:12.376 "bdev_lvol_clone_bdev", 00:07:12.376 "bdev_lvol_clone", 00:07:12.376 "bdev_lvol_snapshot", 00:07:12.376 "bdev_lvol_create", 00:07:12.376 "bdev_lvol_delete_lvstore", 00:07:12.376 "bdev_lvol_rename_lvstore", 00:07:12.376 "bdev_lvol_create_lvstore", 00:07:12.376 "bdev_raid_set_options", 00:07:12.376 
"bdev_raid_remove_base_bdev", 00:07:12.376 "bdev_raid_add_base_bdev", 00:07:12.376 "bdev_raid_delete", 00:07:12.376 "bdev_raid_create", 00:07:12.376 "bdev_raid_get_bdevs", 00:07:12.376 "bdev_error_inject_error", 00:07:12.376 "bdev_error_delete", 00:07:12.377 "bdev_error_create", 00:07:12.377 "bdev_split_delete", 00:07:12.377 "bdev_split_create", 00:07:12.377 "bdev_delay_delete", 00:07:12.377 "bdev_delay_create", 00:07:12.377 "bdev_delay_update_latency", 00:07:12.377 "bdev_zone_block_delete", 00:07:12.377 "bdev_zone_block_create", 00:07:12.377 "blobfs_create", 00:07:12.377 "blobfs_detect", 00:07:12.377 "blobfs_set_cache_size", 00:07:12.377 "bdev_aio_delete", 00:07:12.377 "bdev_aio_rescan", 00:07:12.377 "bdev_aio_create", 00:07:12.377 "bdev_ftl_set_property", 00:07:12.377 "bdev_ftl_get_properties", 00:07:12.377 "bdev_ftl_get_stats", 00:07:12.377 "bdev_ftl_unmap", 00:07:12.377 "bdev_ftl_unload", 00:07:12.377 "bdev_ftl_delete", 00:07:12.377 "bdev_ftl_load", 00:07:12.377 "bdev_ftl_create", 00:07:12.377 "bdev_virtio_attach_controller", 00:07:12.377 "bdev_virtio_scsi_get_devices", 00:07:12.377 "bdev_virtio_detach_controller", 00:07:12.377 "bdev_virtio_blk_set_hotplug", 00:07:12.377 "bdev_iscsi_delete", 00:07:12.377 "bdev_iscsi_create", 00:07:12.377 "bdev_iscsi_set_options", 00:07:12.377 "accel_error_inject_error", 00:07:12.377 "ioat_scan_accel_module", 00:07:12.377 "dsa_scan_accel_module", 00:07:12.377 "iaa_scan_accel_module", 00:07:12.377 "vfu_virtio_create_scsi_endpoint", 00:07:12.377 "vfu_virtio_scsi_remove_target", 00:07:12.377 "vfu_virtio_scsi_add_target", 00:07:12.377 "vfu_virtio_create_blk_endpoint", 00:07:12.377 "vfu_virtio_delete_endpoint", 00:07:12.377 "iscsi_set_options", 00:07:12.377 "iscsi_get_auth_groups", 00:07:12.377 "iscsi_auth_group_remove_secret", 00:07:12.377 "iscsi_auth_group_add_secret", 00:07:12.377 "iscsi_delete_auth_group", 00:07:12.377 "iscsi_create_auth_group", 00:07:12.377 "iscsi_set_discovery_auth", 00:07:12.377 "iscsi_get_options", 00:07:12.377 "iscsi_target_node_request_logout", 00:07:12.377 "iscsi_target_node_set_redirect", 00:07:12.377 "iscsi_target_node_set_auth", 00:07:12.377 "iscsi_target_node_add_lun", 00:07:12.377 "iscsi_get_connections", 00:07:12.377 "iscsi_portal_group_set_auth", 00:07:12.377 "iscsi_start_portal_group", 00:07:12.377 "iscsi_delete_portal_group", 00:07:12.377 "iscsi_create_portal_group", 00:07:12.377 "iscsi_get_portal_groups", 00:07:12.377 "iscsi_delete_target_node", 00:07:12.377 "iscsi_target_node_remove_pg_ig_maps", 00:07:12.377 "iscsi_target_node_add_pg_ig_maps", 00:07:12.377 "iscsi_create_target_node", 00:07:12.377 "iscsi_get_target_nodes", 00:07:12.377 "iscsi_delete_initiator_group", 00:07:12.377 "iscsi_initiator_group_remove_initiators", 00:07:12.377 "iscsi_initiator_group_add_initiators", 00:07:12.377 "iscsi_create_initiator_group", 00:07:12.377 "iscsi_get_initiator_groups", 00:07:12.377 "nvmf_set_crdt", 00:07:12.377 "nvmf_set_config", 00:07:12.377 "nvmf_set_max_subsystems", 00:07:12.377 "nvmf_subsystem_get_listeners", 00:07:12.377 "nvmf_subsystem_get_qpairs", 00:07:12.377 "nvmf_subsystem_get_controllers", 00:07:12.377 "nvmf_get_stats", 00:07:12.377 "nvmf_get_transports", 00:07:12.377 "nvmf_create_transport", 00:07:12.377 "nvmf_get_targets", 00:07:12.377 "nvmf_delete_target", 00:07:12.377 "nvmf_create_target", 00:07:12.377 "nvmf_subsystem_allow_any_host", 00:07:12.377 "nvmf_subsystem_remove_host", 00:07:12.377 "nvmf_subsystem_add_host", 00:07:12.377 "nvmf_subsystem_remove_ns", 00:07:12.377 "nvmf_subsystem_add_ns", 00:07:12.377 
"nvmf_subsystem_listener_set_ana_state", 00:07:12.377 "nvmf_discovery_get_referrals", 00:07:12.377 "nvmf_discovery_remove_referral", 00:07:12.377 "nvmf_discovery_add_referral", 00:07:12.377 "nvmf_subsystem_remove_listener", 00:07:12.377 "nvmf_subsystem_add_listener", 00:07:12.377 "nvmf_delete_subsystem", 00:07:12.377 "nvmf_create_subsystem", 00:07:12.377 "nvmf_get_subsystems", 00:07:12.377 "env_dpdk_get_mem_stats", 00:07:12.377 "nbd_get_disks", 00:07:12.377 "nbd_stop_disk", 00:07:12.377 "nbd_start_disk", 00:07:12.377 "ublk_recover_disk", 00:07:12.377 "ublk_get_disks", 00:07:12.377 "ublk_stop_disk", 00:07:12.377 "ublk_start_disk", 00:07:12.377 "ublk_destroy_target", 00:07:12.377 "ublk_create_target", 00:07:12.377 "virtio_blk_create_transport", 00:07:12.377 "virtio_blk_get_transports", 00:07:12.377 "vhost_controller_set_coalescing", 00:07:12.377 "vhost_get_controllers", 00:07:12.377 "vhost_delete_controller", 00:07:12.377 "vhost_create_blk_controller", 00:07:12.377 "vhost_scsi_controller_remove_target", 00:07:12.377 "vhost_scsi_controller_add_target", 00:07:12.377 "vhost_start_scsi_controller", 00:07:12.377 "vhost_create_scsi_controller", 00:07:12.377 "thread_set_cpumask", 00:07:12.377 "framework_get_scheduler", 00:07:12.377 "framework_set_scheduler", 00:07:12.377 "framework_get_reactors", 00:07:12.377 "thread_get_io_channels", 00:07:12.377 "thread_get_pollers", 00:07:12.377 "thread_get_stats", 00:07:12.377 "framework_monitor_context_switch", 00:07:12.377 "spdk_kill_instance", 00:07:12.377 "log_enable_timestamps", 00:07:12.377 "log_get_flags", 00:07:12.377 "log_clear_flag", 00:07:12.377 "log_set_flag", 00:07:12.377 "log_get_level", 00:07:12.377 "log_set_level", 00:07:12.377 "log_get_print_level", 00:07:12.377 "log_set_print_level", 00:07:12.377 "framework_enable_cpumask_locks", 00:07:12.377 "framework_disable_cpumask_locks", 00:07:12.377 "framework_wait_init", 00:07:12.377 "framework_start_init", 00:07:12.377 "scsi_get_devices", 00:07:12.377 "bdev_get_histogram", 00:07:12.377 "bdev_enable_histogram", 00:07:12.377 "bdev_set_qos_limit", 00:07:12.377 "bdev_set_qd_sampling_period", 00:07:12.377 "bdev_get_bdevs", 00:07:12.377 "bdev_reset_iostat", 00:07:12.377 "bdev_get_iostat", 00:07:12.377 "bdev_examine", 00:07:12.377 "bdev_wait_for_examine", 00:07:12.377 "bdev_set_options", 00:07:12.377 "notify_get_notifications", 00:07:12.377 "notify_get_types", 00:07:12.377 "accel_get_stats", 00:07:12.377 "accel_set_options", 00:07:12.377 "accel_set_driver", 00:07:12.377 "accel_crypto_key_destroy", 00:07:12.377 "accel_crypto_keys_get", 00:07:12.377 "accel_crypto_key_create", 00:07:12.377 "accel_assign_opc", 00:07:12.377 "accel_get_module_info", 00:07:12.377 "accel_get_opc_assignments", 00:07:12.377 "vmd_rescan", 00:07:12.377 "vmd_remove_device", 00:07:12.377 "vmd_enable", 00:07:12.377 "sock_set_default_impl", 00:07:12.377 "sock_impl_set_options", 00:07:12.377 "sock_impl_get_options", 00:07:12.377 "iobuf_get_stats", 00:07:12.377 "iobuf_set_options", 00:07:12.377 "framework_get_pci_devices", 00:07:12.377 "framework_get_config", 00:07:12.377 "framework_get_subsystems", 00:07:12.377 "vfu_tgt_set_base_path", 00:07:12.377 "trace_get_info", 00:07:12.377 "trace_get_tpoint_group_mask", 00:07:12.377 "trace_disable_tpoint_group", 00:07:12.377 "trace_enable_tpoint_group", 00:07:12.377 "trace_clear_tpoint_mask", 00:07:12.377 "trace_set_tpoint_mask", 00:07:12.377 "spdk_get_version", 00:07:12.377 "rpc_get_methods" 00:07:12.377 ] 00:07:12.377 23:20:33 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:12.377 
23:20:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:12.377 23:20:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.377 23:20:33 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:12.377 23:20:33 -- spdkcli/tcp.sh@38 -- # killprocess 132341 00:07:12.377 23:20:33 -- common/autotest_common.sh@926 -- # '[' -z 132341 ']' 00:07:12.377 23:20:33 -- common/autotest_common.sh@930 -- # kill -0 132341 00:07:12.377 23:20:33 -- common/autotest_common.sh@931 -- # uname 00:07:12.377 23:20:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:12.377 23:20:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132341 00:07:12.377 23:20:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:12.377 23:20:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:12.377 23:20:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132341' 00:07:12.377 killing process with pid 132341 00:07:12.377 23:20:33 -- common/autotest_common.sh@945 -- # kill 132341 00:07:12.377 23:20:33 -- common/autotest_common.sh@950 -- # wait 132341 00:07:12.943 00:07:12.943 real 0m2.337s 00:07:12.943 user 0m4.882s 00:07:12.943 sys 0m0.612s 00:07:12.943 23:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.943 23:20:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 ************************************ 00:07:12.943 END TEST spdkcli_tcp 00:07:12.943 ************************************ 00:07:12.943 23:20:33 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:12.943 23:20:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.943 23:20:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.943 23:20:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 ************************************ 00:07:12.943 START TEST dpdk_mem_utility 00:07:12.943 ************************************ 00:07:12.943 23:20:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:12.943 * Looking for test storage... 00:07:12.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:12.943 23:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:12.943 23:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=132681 00:07:12.943 23:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:12.943 23:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 132681 00:07:12.943 23:20:33 -- common/autotest_common.sh@819 -- # '[' -z 132681 ']' 00:07:12.943 23:20:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.943 23:20:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:12.943 23:20:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.943 23:20:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:12.944 23:20:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.944 [2024-07-11 23:20:33.765058] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:12.944 [2024-07-11 23:20:33.765173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132681 ] 00:07:12.944 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.944 [2024-07-11 23:20:33.836054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.202 [2024-07-11 23:20:33.928426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.202 [2024-07-11 23:20:33.928604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.136 23:20:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.136 23:20:34 -- common/autotest_common.sh@852 -- # return 0 00:07:14.136 23:20:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:14.136 23:20:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:14.136 23:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.136 23:20:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.136 { 00:07:14.136 "filename": "/tmp/spdk_mem_dump.txt" 00:07:14.136 } 00:07:14.136 23:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.136 23:20:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:14.136 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:14.136 1 heaps totaling size 814.000000 MiB 00:07:14.136 size: 814.000000 MiB heap id: 0 00:07:14.136 end heaps---------- 00:07:14.136 8 mempools totaling size 598.116089 MiB 00:07:14.136 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:14.136 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:14.136 size: 84.521057 MiB name: bdev_io_132681 00:07:14.136 size: 51.011292 MiB name: evtpool_132681 00:07:14.136 size: 50.003479 MiB name: msgpool_132681 00:07:14.136 size: 21.763794 MiB name: PDU_Pool 00:07:14.136 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:14.136 size: 0.026123 MiB name: Session_Pool 00:07:14.136 end mempools------- 00:07:14.136 6 memzones totaling size 4.142822 MiB 00:07:14.136 size: 1.000366 MiB name: RG_ring_0_132681 00:07:14.136 size: 1.000366 MiB name: RG_ring_1_132681 00:07:14.136 size: 1.000366 MiB name: RG_ring_4_132681 00:07:14.136 size: 1.000366 MiB name: RG_ring_5_132681 00:07:14.136 size: 0.125366 MiB name: RG_ring_2_132681 00:07:14.136 size: 0.015991 MiB name: RG_ring_3_132681 00:07:14.136 end memzones------- 00:07:14.136 23:20:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:14.136 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:14.136 list of free elements. 
size: 12.519348 MiB 00:07:14.136 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:14.136 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:14.136 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:14.136 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:14.136 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:14.136 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:14.136 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:14.136 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:14.136 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:14.136 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:14.136 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:14.136 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:14.136 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:14.136 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:14.136 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:14.136 list of standard malloc elements. size: 199.218079 MiB 00:07:14.136 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:14.136 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:14.136 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:14.136 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:14.136 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:14.136 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:14.136 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:14.136 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:14.136 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:14.136 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:14.136 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:14.136 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:14.136 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:14.136 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:14.136 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:14.136 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:07:14.136 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:14.136 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:14.136 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:14.136 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:14.136 list of memzone associated elements. size: 602.262573 MiB 00:07:14.136 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:14.136 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:14.136 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:14.136 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:14.136 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:14.136 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_132681_0 00:07:14.136 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:14.136 associated memzone info: size: 48.002930 MiB name: MP_evtpool_132681_0 00:07:14.136 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:14.136 associated memzone info: size: 48.002930 MiB name: MP_msgpool_132681_0 00:07:14.136 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:14.136 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:14.136 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:14.136 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:14.136 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:14.136 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_132681 00:07:14.136 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:14.136 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_132681 00:07:14.137 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:14.137 associated memzone info: size: 1.007996 MiB name: MP_evtpool_132681 00:07:14.137 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:14.137 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:14.137 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:14.137 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:14.137 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:14.137 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:14.137 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:14.137 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:14.137 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:14.137 associated memzone info: size: 1.000366 MiB name: RG_ring_0_132681 00:07:14.137 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:14.137 associated memzone info: size: 1.000366 MiB name: RG_ring_1_132681 00:07:14.137 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:14.137 associated memzone info: size: 1.000366 MiB name: RG_ring_4_132681 00:07:14.137 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:14.137 associated memzone info: size: 1.000366 MiB name: RG_ring_5_132681 00:07:14.137 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:14.137 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_132681 00:07:14.137 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:14.137 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:14.137 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:14.137 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:14.137 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:14.137 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:14.137 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:14.137 associated memzone info: size: 0.125366 MiB name: RG_ring_2_132681 00:07:14.137 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:14.137 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:14.137 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:14.137 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:14.137 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:14.137 associated memzone info: size: 0.015991 MiB name: RG_ring_3_132681 00:07:14.137 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:14.137 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:14.137 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:14.137 associated memzone info: size: 0.000183 MiB name: MP_msgpool_132681 00:07:14.137 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:14.137 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_132681 00:07:14.137 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:14.137 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:14.137 23:20:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:14.137 23:20:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 132681 00:07:14.137 23:20:34 -- common/autotest_common.sh@926 -- # '[' -z 132681 ']' 00:07:14.137 23:20:34 -- common/autotest_common.sh@930 -- # kill -0 132681 00:07:14.137 23:20:34 -- common/autotest_common.sh@931 -- # uname 00:07:14.137 23:20:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:14.137 23:20:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132681 00:07:14.137 23:20:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:14.137 23:20:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:14.137 23:20:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132681' 00:07:14.137 killing process with pid 132681 00:07:14.137 23:20:34 -- common/autotest_common.sh@945 -- # kill 132681 00:07:14.137 23:20:34 -- common/autotest_common.sh@950 -- # wait 132681 00:07:14.704 00:07:14.704 real 0m1.734s 00:07:14.704 user 0m1.947s 00:07:14.704 sys 0m0.479s 00:07:14.704 23:20:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.704 23:20:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.704 ************************************ 00:07:14.704 END TEST dpdk_mem_utility 00:07:14.704 ************************************ 00:07:14.704 23:20:35 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:14.704 23:20:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.704 23:20:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.704 23:20:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.704 
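[Note] The memory report above is produced in two steps: env_dpdk_get_mem_stats asks the target to write a dump (the reply names the file, /tmp/spdk_mem_dump.txt), then dpdk_mem_info.py parses it, first as a summary of heaps, mempools and memzones, then with -m 0 as per-element detail for heap id 0. A condensed replay against a running target:

    # ask the target to write the DPDK memory dump
    ./scripts/rpc.py env_dpdk_get_mem_stats    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py

    # per-element detail for heap id 0: free list, malloc bins, memzone associations
    ./scripts/dpdk_mem_info.py -m 0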
************************************ 00:07:14.704 START TEST event 00:07:14.704 ************************************ 00:07:14.704 23:20:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:14.704 * Looking for test storage... 00:07:14.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:14.704 23:20:35 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:14.704 23:20:35 -- bdev/nbd_common.sh@6 -- # set -e 00:07:14.704 23:20:35 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:14.704 23:20:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:14.704 23:20:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.704 23:20:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.704 ************************************ 00:07:14.704 START TEST event_perf 00:07:14.704 ************************************ 00:07:14.704 23:20:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:14.704 Running I/O for 1 seconds...[2024-07-11 23:20:35.506436] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:14.704 [2024-07-11 23:20:35.506609] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132878 ] 00:07:14.704 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.704 [2024-07-11 23:20:35.600765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.962 [2024-07-11 23:20:35.695428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.962 [2024-07-11 23:20:35.695481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.962 [2024-07-11 23:20:35.695531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.962 [2024-07-11 23:20:35.695534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.896 Running I/O for 1 seconds... 00:07:15.896 lcore 0: 207897 00:07:15.896 lcore 1: 207895 00:07:15.896 lcore 2: 207896 00:07:15.896 lcore 3: 207895 00:07:15.896 done. 
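[Note] event_perf ran with -m 0xF, so four reactors came up and each lcore logged roughly 207k events in the one-second window. The coremask is a plain bit field; a throwaway helper that expands such a mask, purely illustrative:

    mask=0xF
    for ((core = 0; core < 64; core++)); do
        if (( (mask >> core) & 1 )); then
            echo "lcore $core selected"   # prints lcores 0 through 3 for 0xF
        fi
    done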
00:07:15.896 00:07:15.896 real 0m1.297s 00:07:15.896 user 0m4.174s 00:07:15.896 sys 0m0.117s 00:07:15.896 23:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.897 23:20:36 -- common/autotest_common.sh@10 -- # set +x 00:07:15.897 ************************************ 00:07:15.897 END TEST event_perf 00:07:15.897 ************************************ 00:07:15.897 23:20:36 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:15.897 23:20:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:15.897 23:20:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.897 23:20:36 -- common/autotest_common.sh@10 -- # set +x 00:07:15.897 ************************************ 00:07:15.897 START TEST event_reactor 00:07:15.897 ************************************ 00:07:15.897 23:20:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:15.897 [2024-07-11 23:20:36.842220] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:15.897 [2024-07-11 23:20:36.842307] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133037 ] 00:07:16.155 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.155 [2024-07-11 23:20:36.939602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.155 [2024-07-11 23:20:37.032491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.531 test_start 00:07:17.531 oneshot 00:07:17.531 tick 100 00:07:17.531 tick 100 00:07:17.531 tick 250 00:07:17.531 tick 100 00:07:17.531 tick 100 00:07:17.531 tick 100 00:07:17.531 tick 250 00:07:17.531 tick 500 00:07:17.531 tick 100 00:07:17.531 tick 100 00:07:17.531 tick 250 00:07:17.531 tick 100 00:07:17.531 tick 100 00:07:17.531 test_end 00:07:17.531 00:07:17.531 real 0m1.296s 00:07:17.531 user 0m1.175s 00:07:17.531 sys 0m0.116s 00:07:17.531 23:20:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.531 23:20:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.531 ************************************ 00:07:17.531 END TEST event_reactor 00:07:17.531 ************************************ 00:07:17.531 23:20:38 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.531 23:20:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:17.531 23:20:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.531 23:20:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.531 ************************************ 00:07:17.531 START TEST event_reactor_perf 00:07:17.531 ************************************ 00:07:17.531 23:20:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.531 [2024-07-11 23:20:38.161734] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:17.531 [2024-07-11 23:20:38.161832] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133298 ] 00:07:17.531 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.531 [2024-07-11 23:20:38.235738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.531 [2024-07-11 23:20:38.329412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.466 test_start 00:07:18.466 test_end 00:07:18.466 Performance: 351160 events per second 00:07:18.466 00:07:18.466 real 0m1.263s 00:07:18.466 user 0m1.155s 00:07:18.466 sys 0m0.102s 00:07:18.466 23:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.466 23:20:39 -- common/autotest_common.sh@10 -- # set +x 00:07:18.466 ************************************ 00:07:18.466 END TEST event_reactor_perf 00:07:18.466 ************************************ 00:07:18.725 23:20:39 -- event/event.sh@49 -- # uname -s 00:07:18.725 23:20:39 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:18.725 23:20:39 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:18.725 23:20:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.725 23:20:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.725 23:20:39 -- common/autotest_common.sh@10 -- # set +x 00:07:18.725 ************************************ 00:07:18.725 START TEST event_scheduler 00:07:18.725 ************************************ 00:07:18.725 23:20:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:18.725 * Looking for test storage... 00:07:18.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:18.725 23:20:39 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:18.725 23:20:39 -- scheduler/scheduler.sh@35 -- # scheduler_pid=133505 00:07:18.725 23:20:39 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:18.725 23:20:39 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.725 23:20:39 -- scheduler/scheduler.sh@37 -- # waitforlisten 133505 00:07:18.726 23:20:39 -- common/autotest_common.sh@819 -- # '[' -z 133505 ']' 00:07:18.726 23:20:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.726 23:20:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.726 23:20:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.726 23:20:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.726 23:20:39 -- common/autotest_common.sh@10 -- # set +x 00:07:18.726 [2024-07-11 23:20:39.551292] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:18.726 [2024-07-11 23:20:39.551385] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133505 ] 00:07:18.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.726 [2024-07-11 23:20:39.649575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.983 [2024-07-11 23:20:39.797697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.983 [2024-07-11 23:20:39.797749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.983 [2024-07-11 23:20:39.797800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.983 [2024-07-11 23:20:39.797803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.241 23:20:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.241 23:20:40 -- common/autotest_common.sh@852 -- # return 0 00:07:19.241 23:20:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:19.241 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.241 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.241 POWER: Env isn't set yet! 00:07:19.241 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:19.241 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:07:19.241 POWER: Cannot get available frequencies of lcore 0 00:07:19.241 POWER: Attempting to initialise PSTAT power management... 00:07:19.241 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:07:19.241 POWER: Initialized successfully for lcore 0 power management 00:07:19.241 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:07:19.241 POWER: Initialized successfully for lcore 1 power management 00:07:19.241 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:07:19.241 POWER: Initialized successfully for lcore 2 power management 00:07:19.241 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:07:19.241 POWER: Initialized successfully for lcore 3 power management 00:07:19.241 [2024-07-11 23:20:40.015988] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:19.241 [2024-07-11 23:20:40.016007] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:19.241 [2024-07-11 23:20:40.016019] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:19.242 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.242 23:20:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:19.242 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.242 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.242 [2024-07-11 23:20:40.120974] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
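[Note] The POWER lines above show the ACPI cpufreq probe failing (scaling_available_frequencies is absent on this host) and the driver falling back to PSTAT, after which each managed lcore's governor is switched to performance. The same sysfs knobs can be inspected and driven by hand; a sketch, with the write requiring root and shown for one CPU only:

    # report the current governor per CPU from the nodes the log references
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        gov=$(cat "$cpu/cpufreq/scaling_governor" 2>/dev/null) || continue
        echo "${cpu##*/}: $gov"
    done

    # what "set to 'performance'" amounts to for one lcore (root required)
    echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor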
00:07:19.242 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.242 23:20:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:19.242 23:20:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.242 23:20:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.242 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.242 ************************************ 00:07:19.242 START TEST scheduler_create_thread 00:07:19.242 ************************************ 00:07:19.242 23:20:40 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:07:19.242 23:20:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:19.242 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.242 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.242 2 00:07:19.242 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.242 23:20:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:19.242 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.242 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.242 3 00:07:19.242 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.242 23:20:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:19.242 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.242 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.242 4 00:07:19.242 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.242 23:20:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:19.242 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.242 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.242 5 00:07:19.242 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.242 23:20:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:19.242 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.242 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.500 6 00:07:19.500 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.500 23:20:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:19.500 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.500 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.500 7 00:07:19.500 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.500 23:20:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:19.500 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.500 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.500 8 00:07:19.500 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.500 23:20:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:19.500 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.500 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.500 9 00:07:19.500 
23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.500 23:20:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:19.500 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.500 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.500 10 00:07:19.500 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.500 23:20:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:19.500 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.500 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.500 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.501 23:20:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:19.501 23:20:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:19.501 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.501 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.501 23:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.501 23:20:40 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:19.501 23:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.501 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:20.873 23:20:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.873 23:20:41 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:20.873 23:20:41 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:20.873 23:20:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.873 23:20:41 -- common/autotest_common.sh@10 -- # set +x 00:07:21.810 23:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:21.810 00:07:21.810 real 0m2.623s 00:07:21.810 user 0m0.009s 00:07:21.810 sys 0m0.006s 00:07:21.810 23:20:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.810 23:20:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.810 ************************************ 00:07:21.810 END TEST scheduler_create_thread 00:07:21.810 ************************************ 00:07:22.089 23:20:42 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:22.089 23:20:42 -- scheduler/scheduler.sh@46 -- # killprocess 133505 00:07:22.089 23:20:42 -- common/autotest_common.sh@926 -- # '[' -z 133505 ']' 00:07:22.089 23:20:42 -- common/autotest_common.sh@930 -- # kill -0 133505 00:07:22.089 23:20:42 -- common/autotest_common.sh@931 -- # uname 00:07:22.089 23:20:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:22.089 23:20:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133505 00:07:22.089 23:20:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:22.089 23:20:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:22.089 23:20:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133505' 00:07:22.089 killing process with pid 133505 00:07:22.089 23:20:42 -- common/autotest_common.sh@945 -- # kill 133505 00:07:22.089 23:20:42 -- common/autotest_common.sh@950 -- # wait 133505 00:07:22.353 [2024-07-11 23:20:43.232447] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
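[Note] The thread create/set/delete calls in the test that just finished go through a test-only rpc.py plugin rather than the core RPC set. A condensed replay, assuming scheduler_plugin is importable as the harness arranges; scheduler_thread_create prints the new thread id, which the later calls consume:

    rpc="./scripts/rpc.py --plugin scheduler_plugin"

    # spawn a busy thread pinned to lcore 0 (-m cpumask, -a active percentage)
    tid=$($rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)

    # retune it to 50% active, then retire it
    $rpc scheduler_thread_set_active "$tid" 50
    $rpc scheduler_thread_delete "$tid"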
00:07:22.611 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:07:22.611 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:22.611 POWER: Power management governor of lcore 1 has been set to 'userspace' successfully 00:07:22.611 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:22.611 POWER: Power management governor of lcore 2 has been set to 'userspace' successfully 00:07:22.611 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:22.611 POWER: Power management governor of lcore 3 has been set to 'userspace' successfully 00:07:22.611 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:22.611 00:07:22.611 real 0m4.027s 00:07:22.611 user 0m6.571s 00:07:22.611 sys 0m0.390s 00:07:22.611 23:20:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.611 23:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.611 ************************************ 00:07:22.611 END TEST event_scheduler 00:07:22.611 ************************************ 00:07:22.611 23:20:43 -- event/event.sh@51 -- # modprobe -n nbd 00:07:22.611 23:20:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:22.611 23:20:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:22.611 23:20:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.611 23:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.611 ************************************ 00:07:22.611 START TEST app_repeat 00:07:22.611 ************************************ 00:07:22.611 23:20:43 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:07:22.611 23:20:43 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.611 23:20:43 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.611 23:20:43 -- event/event.sh@13 -- # local nbd_list 00:07:22.611 23:20:43 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.611 23:20:43 -- event/event.sh@14 -- # local bdev_list 00:07:22.611 23:20:43 -- event/event.sh@15 -- # local repeat_times=4 00:07:22.611 23:20:43 -- event/event.sh@17 -- # modprobe nbd 00:07:22.611 23:20:43 -- event/event.sh@19 -- # repeat_pid=133968 00:07:22.611 23:20:43 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:22.611 23:20:43 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.612 23:20:43 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 133968' 00:07:22.612 Process app_repeat pid: 133968 00:07:22.612 23:20:43 -- event/event.sh@23 -- # for i in {0..2} 00:07:22.612 23:20:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:22.612 spdk_app_start Round 0 00:07:22.612 23:20:43 -- event/event.sh@25 -- # waitforlisten 133968 /var/tmp/spdk-nbd.sock 00:07:22.612 23:20:43 -- common/autotest_common.sh@819 -- # '[' -z 133968 ']' 00:07:22.612 23:20:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.612 23:20:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:22.612 23:20:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:22.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.612 23:20:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:22.612 23:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.612 [2024-07-11 23:20:43.548465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:22.612 [2024-07-11 23:20:43.548632] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133968 ] 00:07:22.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.870 [2024-07-11 23:20:43.639952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.870 [2024-07-11 23:20:43.736590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.870 [2024-07-11 23:20:43.736596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.802 23:20:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.802 23:20:44 -- common/autotest_common.sh@852 -- # return 0 00:07:23.802 23:20:44 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.061 Malloc0 00:07:24.061 23:20:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.319 Malloc1 00:07:24.319 23:20:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@12 -- # local i 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.319 23:20:45 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:24.884 /dev/nbd0 00:07:25.142 23:20:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.142 23:20:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.142 23:20:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:25.142 23:20:45 -- common/autotest_common.sh@857 -- # local i 00:07:25.142 23:20:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:25.142 23:20:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:25.142 23:20:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:25.142 23:20:45 -- 
common/autotest_common.sh@861 -- # break 00:07:25.142 23:20:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:25.142 23:20:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:25.142 23:20:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.142 1+0 records in 00:07:25.142 1+0 records out 00:07:25.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195869 s, 20.9 MB/s 00:07:25.142 23:20:45 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:25.142 23:20:45 -- common/autotest_common.sh@874 -- # size=4096 00:07:25.142 23:20:45 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:25.142 23:20:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:25.142 23:20:45 -- common/autotest_common.sh@877 -- # return 0 00:07:25.142 23:20:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.142 23:20:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.142 23:20:45 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:25.400 /dev/nbd1 00:07:25.400 23:20:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.400 23:20:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.400 23:20:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:25.400 23:20:46 -- common/autotest_common.sh@857 -- # local i 00:07:25.400 23:20:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:25.400 23:20:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:25.400 23:20:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:25.400 23:20:46 -- common/autotest_common.sh@861 -- # break 00:07:25.400 23:20:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:25.400 23:20:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:25.400 23:20:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.400 1+0 records in 00:07:25.400 1+0 records out 00:07:25.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238239 s, 17.2 MB/s 00:07:25.400 23:20:46 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:25.400 23:20:46 -- common/autotest_common.sh@874 -- # size=4096 00:07:25.657 23:20:46 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:25.657 23:20:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:25.657 23:20:46 -- common/autotest_common.sh@877 -- # return 0 00:07:25.657 23:20:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.657 23:20:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.657 23:20:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.657 23:20:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.657 23:20:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:25.914 { 00:07:25.914 "nbd_device": "/dev/nbd0", 00:07:25.914 "bdev_name": "Malloc0" 00:07:25.914 }, 00:07:25.914 { 00:07:25.914 "nbd_device": "/dev/nbd1", 
00:07:25.914 "bdev_name": "Malloc1" 00:07:25.914 } 00:07:25.914 ]' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:25.914 { 00:07:25.914 "nbd_device": "/dev/nbd0", 00:07:25.914 "bdev_name": "Malloc0" 00:07:25.914 }, 00:07:25.914 { 00:07:25.914 "nbd_device": "/dev/nbd1", 00:07:25.914 "bdev_name": "Malloc1" 00:07:25.914 } 00:07:25.914 ]' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:25.914 /dev/nbd1' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:25.914 /dev/nbd1' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@65 -- # count=2 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@95 -- # count=2 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:25.914 256+0 records in 00:07:25.914 256+0 records out 00:07:25.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054283 s, 193 MB/s 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:25.914 256+0 records in 00:07:25.914 256+0 records out 00:07:25.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024077 s, 43.6 MB/s 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:25.914 256+0 records in 00:07:25.914 256+0 records out 00:07:25.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02661 s, 39.4 MB/s 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@51 -- # local i 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.914 23:20:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@41 -- # break 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.172 23:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.441 23:20:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@41 -- # break 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.702 23:20:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@65 -- # true 00:07:27.266 23:20:47 -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.267 23:20:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.267 23:20:47 -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.267 23:20:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.267 23:20:47 -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.267 23:20:47 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.525 23:20:48 -- event/event.sh@35 -- # 
sleep 3 00:07:27.783 [2024-07-11 23:20:48.578002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.783 [2024-07-11 23:20:48.669231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.783 [2024-07-11 23:20:48.669231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.783 [2024-07-11 23:20:48.732817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.783 [2024-07-11 23:20:48.732878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.061 23:20:51 -- event/event.sh@23 -- # for i in {0..2} 00:07:31.061 23:20:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:31.061 spdk_app_start Round 1 00:07:31.061 23:20:51 -- event/event.sh@25 -- # waitforlisten 133968 /var/tmp/spdk-nbd.sock 00:07:31.061 23:20:51 -- common/autotest_common.sh@819 -- # '[' -z 133968 ']' 00:07:31.061 23:20:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.061 23:20:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:31.061 23:20:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:31.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.061 23:20:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:31.061 23:20:51 -- common/autotest_common.sh@10 -- # set +x 00:07:31.061 23:20:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:31.061 23:20:51 -- common/autotest_common.sh@852 -- # return 0 00:07:31.061 23:20:51 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.061 Malloc0 00:07:31.061 23:20:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.319 Malloc1 00:07:31.319 23:20:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@12 -- # local i 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.319 23:20:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:31.885 /dev/nbd0 00:07:31.885 23:20:52 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:31.885 23:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:31.885 23:20:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:31.885 23:20:52 -- common/autotest_common.sh@857 -- # local i 00:07:31.885 23:20:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:31.885 23:20:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:31.885 23:20:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:31.885 23:20:52 -- common/autotest_common.sh@861 -- # break 00:07:31.885 23:20:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:31.885 23:20:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:31.885 23:20:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.885 1+0 records in 00:07:31.885 1+0 records out 00:07:31.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179088 s, 22.9 MB/s 00:07:31.885 23:20:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:31.885 23:20:52 -- common/autotest_common.sh@874 -- # size=4096 00:07:31.885 23:20:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:31.885 23:20:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:31.885 23:20:52 -- common/autotest_common.sh@877 -- # return 0 00:07:31.885 23:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.885 23:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.885 23:20:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.148 /dev/nbd1 00:07:32.148 23:20:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.148 23:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.148 23:20:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:32.148 23:20:52 -- common/autotest_common.sh@857 -- # local i 00:07:32.148 23:20:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:32.148 23:20:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:32.148 23:20:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:32.148 23:20:52 -- common/autotest_common.sh@861 -- # break 00:07:32.148 23:20:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:32.148 23:20:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:32.148 23:20:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.148 1+0 records in 00:07:32.148 1+0 records out 00:07:32.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178852 s, 22.9 MB/s 00:07:32.148 23:20:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:32.148 23:20:52 -- common/autotest_common.sh@874 -- # size=4096 00:07:32.148 23:20:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:32.148 23:20:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:32.148 23:20:52 -- common/autotest_common.sh@877 -- # return 0 00:07:32.148 23:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.148 23:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.148 23:20:52 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.148 23:20:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.148 23:20:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.409 { 00:07:32.409 "nbd_device": "/dev/nbd0", 00:07:32.409 "bdev_name": "Malloc0" 00:07:32.409 }, 00:07:32.409 { 00:07:32.409 "nbd_device": "/dev/nbd1", 00:07:32.409 "bdev_name": "Malloc1" 00:07:32.409 } 00:07:32.409 ]' 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.409 { 00:07:32.409 "nbd_device": "/dev/nbd0", 00:07:32.409 "bdev_name": "Malloc0" 00:07:32.409 }, 00:07:32.409 { 00:07:32.409 "nbd_device": "/dev/nbd1", 00:07:32.409 "bdev_name": "Malloc1" 00:07:32.409 } 00:07:32.409 ]' 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.409 /dev/nbd1' 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.409 /dev/nbd1' 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.409 256+0 records in 00:07:32.409 256+0 records out 00:07:32.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452098 s, 232 MB/s 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.409 23:20:53 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:32.667 256+0 records in 00:07:32.667 256+0 records out 00:07:32.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236477 s, 44.3 MB/s 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:32.667 256+0 records in 00:07:32.667 256+0 records out 00:07:32.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249381 s, 42.0 MB/s 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@51 -- # local i 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.667 23:20:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@41 -- # break 00:07:32.925 23:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.926 23:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.926 23:20:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@41 -- # break 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.184 23:20:54 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.442 23:20:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.442 23:20:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.442 23:20:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@65 -- # true 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.700 23:20:54 -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.700 23:20:54 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.958 23:20:54 -- event/event.sh@35 -- # sleep 3 00:07:34.216 [2024-07-11 23:20:55.103415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.474 [2024-07-11 23:20:55.194251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.474 [2024-07-11 23:20:55.194257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.474 [2024-07-11 23:20:55.255833] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:34.474 [2024-07-11 23:20:55.255911] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.999 23:20:57 -- event/event.sh@23 -- # for i in {0..2} 00:07:36.999 23:20:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:36.999 spdk_app_start Round 2 00:07:36.999 23:20:57 -- event/event.sh@25 -- # waitforlisten 133968 /var/tmp/spdk-nbd.sock 00:07:36.999 23:20:57 -- common/autotest_common.sh@819 -- # '[' -z 133968 ']' 00:07:36.999 23:20:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.999 23:20:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.999 23:20:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:36.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
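(For reference, the data check that each round above performs per NBD device condenses to three dd/cmp steps. The sketch below reuses the exact flags from the trace; $SPDK is only a shorthand introduced here for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, not a variable the test itself defines.)

# seed 1 MiB of random data, then write it through each NBD device and byte-compare
dd if=/dev/urandom of=$SPDK/test/event/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if=$SPDK/test/event/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct  # O_DIRECT write through the device
  cmp -b -n 1M $SPDK/test/event/nbdrandtest $nbd                             # read back and compare byte-for-byte
done
rm $SPDK/test/event/nbdrandtest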
00:07:36.999 23:20:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.999 23:20:57 -- common/autotest_common.sh@10 -- # set +x 00:07:37.564 23:20:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.564 23:20:58 -- common/autotest_common.sh@852 -- # return 0 00:07:37.564 23:20:58 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.130 Malloc0 00:07:38.130 23:20:58 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.388 Malloc1 00:07:38.388 23:20:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@12 -- # local i 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.388 23:20:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:38.953 /dev/nbd0 00:07:38.953 23:20:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.953 23:20:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.953 23:20:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:38.953 23:20:59 -- common/autotest_common.sh@857 -- # local i 00:07:38.953 23:20:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:38.953 23:20:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:38.953 23:20:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:38.953 23:20:59 -- common/autotest_common.sh@861 -- # break 00:07:38.953 23:20:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:38.953 23:20:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:38.953 23:20:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.953 1+0 records in 00:07:38.953 1+0 records out 00:07:38.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357324 s, 11.5 MB/s 00:07:38.953 23:20:59 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:38.953 23:20:59 -- common/autotest_common.sh@874 -- # size=4096 00:07:38.953 23:20:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:38.953 23:20:59 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:07:38.953 23:20:59 -- common/autotest_common.sh@877 -- # return 0 00:07:38.953 23:20:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.953 23:20:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.953 23:20:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:39.212 /dev/nbd1 00:07:39.212 23:21:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.212 23:21:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.212 23:21:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:39.212 23:21:00 -- common/autotest_common.sh@857 -- # local i 00:07:39.212 23:21:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:39.212 23:21:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:39.212 23:21:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:39.212 23:21:00 -- common/autotest_common.sh@861 -- # break 00:07:39.212 23:21:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:39.212 23:21:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:39.212 23:21:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.212 1+0 records in 00:07:39.212 1+0 records out 00:07:39.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025402 s, 16.1 MB/s 00:07:39.212 23:21:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:39.212 23:21:00 -- common/autotest_common.sh@874 -- # size=4096 00:07:39.212 23:21:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:39.212 23:21:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:39.212 23:21:00 -- common/autotest_common.sh@877 -- # return 0 00:07:39.212 23:21:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.212 23:21:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.212 23:21:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.212 23:21:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.212 23:21:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.777 23:21:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.777 { 00:07:39.777 "nbd_device": "/dev/nbd0", 00:07:39.777 "bdev_name": "Malloc0" 00:07:39.777 }, 00:07:39.777 { 00:07:39.777 "nbd_device": "/dev/nbd1", 00:07:39.777 "bdev_name": "Malloc1" 00:07:39.777 } 00:07:39.777 ]' 00:07:39.777 23:21:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.777 { 00:07:39.777 "nbd_device": "/dev/nbd0", 00:07:39.777 "bdev_name": "Malloc0" 00:07:39.777 }, 00:07:39.777 { 00:07:39.778 "nbd_device": "/dev/nbd1", 00:07:39.778 "bdev_name": "Malloc1" 00:07:39.778 } 00:07:39.778 ]' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:39.778 /dev/nbd1' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:39.778 /dev/nbd1' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@65 -- # count=2 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@95 -- # count=2 00:07:39.778 23:21:00 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:39.778 256+0 records in 00:07:39.778 256+0 records out 00:07:39.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00792805 s, 132 MB/s 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:39.778 256+0 records in 00:07:39.778 256+0 records out 00:07:39.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274198 s, 38.2 MB/s 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:39.778 256+0 records in 00:07:39.778 256+0 records out 00:07:39.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290133 s, 36.1 MB/s 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@51 -- # local i 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.778 23:21:00 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.343 23:21:01 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@41 -- # break 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.343 23:21:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@41 -- # break 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.908 23:21:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@65 -- # true 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@104 -- # count=0 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:41.512 23:21:02 -- bdev/nbd_common.sh@109 -- # return 0 00:07:41.512 23:21:02 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:41.771 23:21:02 -- event/event.sh@35 -- # sleep 3 00:07:42.029 [2024-07-11 23:21:02.802681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.029 [2024-07-11 23:21:02.891654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.029 [2024-07-11 23:21:02.891655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.029 [2024-07-11 23:21:02.954221] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:42.029 [2024-07-11 23:21:02.954306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
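(Each app_repeat round traced above follows the same lifecycle over the RPC socket; a minimal sketch, with $SPDK again standing in for the workspace path and the malloc sizes copied from the trace:)

RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096         # 64 MB malloc bdev with 4096-byte blocks -> Malloc0
$RPC bdev_malloc_create 64 4096         # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0   # expose each bdev as an NBD block device
$RPC nbd_start_disk Malloc1 /dev/nbd1
# ... dd/cmp data verification as sketched earlier ...
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'   # expect empty output once both disks are stopped
$RPC spdk_kill_instance SIGTERM                  # tear down, sleep 3, start the next round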
00:07:45.308 23:21:05 -- event/event.sh@38 -- # waitforlisten 133968 /var/tmp/spdk-nbd.sock 00:07:45.308 23:21:05 -- common/autotest_common.sh@819 -- # '[' -z 133968 ']' 00:07:45.308 23:21:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:45.308 23:21:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:45.308 23:21:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:45.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:45.308 23:21:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:45.308 23:21:05 -- common/autotest_common.sh@10 -- # set +x 00:07:45.308 23:21:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:45.308 23:21:05 -- common/autotest_common.sh@852 -- # return 0 00:07:45.308 23:21:05 -- event/event.sh@39 -- # killprocess 133968 00:07:45.308 23:21:05 -- common/autotest_common.sh@926 -- # '[' -z 133968 ']' 00:07:45.308 23:21:05 -- common/autotest_common.sh@930 -- # kill -0 133968 00:07:45.308 23:21:05 -- common/autotest_common.sh@931 -- # uname 00:07:45.308 23:21:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:45.308 23:21:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133968 00:07:45.308 23:21:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:45.308 23:21:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:45.308 23:21:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133968' 00:07:45.308 killing process with pid 133968 00:07:45.308 23:21:06 -- common/autotest_common.sh@945 -- # kill 133968 00:07:45.308 23:21:06 -- common/autotest_common.sh@950 -- # wait 133968 00:07:45.308 spdk_app_start is called in Round 0. 00:07:45.308 Shutdown signal received, stop current app iteration 00:07:45.308 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:07:45.308 spdk_app_start is called in Round 1. 00:07:45.308 Shutdown signal received, stop current app iteration 00:07:45.308 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:07:45.308 spdk_app_start is called in Round 2. 00:07:45.308 Shutdown signal received, stop current app iteration 00:07:45.308 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:07:45.308 spdk_app_start is called in Round 3. 
00:07:45.308 Shutdown signal received, stop current app iteration 00:07:45.566 23:21:06 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:45.566 23:21:06 -- event/event.sh@42 -- # return 0 00:07:45.567 00:07:45.567 real 0m22.751s 00:07:45.567 user 0m51.659s 00:07:45.567 sys 0m4.393s 00:07:45.567 23:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.567 23:21:06 -- common/autotest_common.sh@10 -- # set +x 00:07:45.567 ************************************ 00:07:45.567 END TEST app_repeat 00:07:45.567 ************************************ 00:07:45.567 23:21:06 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:45.567 23:21:06 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:45.567 23:21:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.567 23:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.567 23:21:06 -- common/autotest_common.sh@10 -- # set +x 00:07:45.567 ************************************ 00:07:45.567 START TEST cpu_locks 00:07:45.567 ************************************ 00:07:45.567 23:21:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:45.567 * Looking for test storage... 00:07:45.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:45.567 23:21:06 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:45.567 23:21:06 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:45.567 23:21:06 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:45.567 23:21:06 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:45.567 23:21:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.567 23:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.567 23:21:06 -- common/autotest_common.sh@10 -- # set +x 00:07:45.567 ************************************ 00:07:45.567 START TEST default_locks 00:07:45.567 ************************************ 00:07:45.567 23:21:06 -- common/autotest_common.sh@1104 -- # default_locks 00:07:45.567 23:21:06 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=137028 00:07:45.567 23:21:06 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:45.567 23:21:06 -- event/cpu_locks.sh@47 -- # waitforlisten 137028 00:07:45.567 23:21:06 -- common/autotest_common.sh@819 -- # '[' -z 137028 ']' 00:07:45.567 23:21:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.567 23:21:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:45.567 23:21:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.567 23:21:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:45.567 23:21:06 -- common/autotest_common.sh@10 -- # set +x 00:07:45.567 [2024-07-11 23:21:06.433043] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:45.567 [2024-07-11 23:21:06.433225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137028 ] 00:07:45.567 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.825 [2024-07-11 23:21:06.529026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.825 [2024-07-11 23:21:06.622331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.825 [2024-07-11 23:21:06.622511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.760 23:21:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:46.760 23:21:07 -- common/autotest_common.sh@852 -- # return 0 00:07:46.760 23:21:07 -- event/cpu_locks.sh@49 -- # locks_exist 137028 00:07:46.760 23:21:07 -- event/cpu_locks.sh@22 -- # lslocks -p 137028 00:07:46.760 23:21:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.018 lslocks: write error 00:07:47.018 23:21:07 -- event/cpu_locks.sh@50 -- # killprocess 137028 00:07:47.018 23:21:07 -- common/autotest_common.sh@926 -- # '[' -z 137028 ']' 00:07:47.018 23:21:07 -- common/autotest_common.sh@930 -- # kill -0 137028 00:07:47.018 23:21:07 -- common/autotest_common.sh@931 -- # uname 00:07:47.018 23:21:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:47.018 23:21:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137028 00:07:47.018 23:21:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:47.018 23:21:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:47.018 23:21:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137028' 00:07:47.018 killing process with pid 137028 00:07:47.018 23:21:07 -- common/autotest_common.sh@945 -- # kill 137028 00:07:47.018 23:21:07 -- common/autotest_common.sh@950 -- # wait 137028 00:07:47.279 23:21:08 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 137028 00:07:47.279 23:21:08 -- common/autotest_common.sh@640 -- # local es=0 00:07:47.279 23:21:08 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 137028 00:07:47.279 23:21:08 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:47.279 23:21:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.279 23:21:08 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:47.279 23:21:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.279 23:21:08 -- common/autotest_common.sh@643 -- # waitforlisten 137028 00:07:47.279 23:21:08 -- common/autotest_common.sh@819 -- # '[' -z 137028 ']' 00:07:47.279 23:21:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.279 23:21:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:47.279 23:21:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
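(The default_locks case above reduces to: start a target pinned by core mask, confirm the core lock is held, kill the target, and expect a later waitforlisten on the dead pid to fail. A sketch of the lock check, assuming only what the lslocks | grep in the trace shows, namely that the lock name contains spdk_cpu_lock:)

$SPDK/build/bin/spdk_tgt -m 0x1 &       # mask 0x1 -> a single reactor on core 0
pid=$!
lslocks -p $pid | grep -q spdk_cpu_lock && echo "core lock held"
kill $pid                               # SIGTERM; the lock vanishes with the process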
00:07:47.279 23:21:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:47.279 23:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (137028) - No such process 00:07:47.279 ERROR: process (pid: 137028) is no longer running 00:07:47.279 23:21:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:47.279 23:21:08 -- common/autotest_common.sh@852 -- # return 1 00:07:47.279 23:21:08 -- common/autotest_common.sh@643 -- # es=1 00:07:47.279 23:21:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:47.279 23:21:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:47.279 23:21:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:47.279 23:21:08 -- event/cpu_locks.sh@54 -- # no_locks 00:07:47.279 23:21:08 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:47.279 23:21:08 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:47.279 23:21:08 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:47.279 00:07:47.279 real 0m1.811s 00:07:47.279 user 0m1.960s 00:07:47.279 sys 0m0.646s 00:07:47.279 23:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.279 23:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.279 ************************************ 00:07:47.279 END TEST default_locks 00:07:47.279 ************************************ 00:07:47.279 23:21:08 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:47.279 23:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:47.279 23:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.279 23:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.279 ************************************ 00:07:47.279 START TEST default_locks_via_rpc 00:07:47.279 ************************************ 00:07:47.279 23:21:08 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:07:47.279 23:21:08 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=137441 00:07:47.279 23:21:08 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.279 23:21:08 -- event/cpu_locks.sh@63 -- # waitforlisten 137441 00:07:47.279 23:21:08 -- common/autotest_common.sh@819 -- # '[' -z 137441 ']' 00:07:47.279 23:21:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.279 23:21:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:47.279 23:21:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.279 23:21:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:47.279 23:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.538 [2024-07-11 23:21:08.305877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:47.538 [2024-07-11 23:21:08.306059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137441 ] 00:07:47.538 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.538 [2024-07-11 23:21:08.404587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.797 [2024-07-11 23:21:08.499976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.797 [2024-07-11 23:21:08.500154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.731 23:21:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:48.731 23:21:09 -- common/autotest_common.sh@852 -- # return 0 00:07:48.731 23:21:09 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:48.731 23:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.731 23:21:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.731 23:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.731 23:21:09 -- event/cpu_locks.sh@67 -- # no_locks 00:07:48.731 23:21:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:48.731 23:21:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:48.731 23:21:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:48.731 23:21:09 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:48.731 23:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.731 23:21:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.731 23:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.731 23:21:09 -- event/cpu_locks.sh@71 -- # locks_exist 137441 00:07:48.731 23:21:09 -- event/cpu_locks.sh@22 -- # lslocks -p 137441 00:07:48.731 23:21:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.731 23:21:09 -- event/cpu_locks.sh@73 -- # killprocess 137441 00:07:48.731 23:21:09 -- common/autotest_common.sh@926 -- # '[' -z 137441 ']' 00:07:48.731 23:21:09 -- common/autotest_common.sh@930 -- # kill -0 137441 00:07:48.731 23:21:09 -- common/autotest_common.sh@931 -- # uname 00:07:48.731 23:21:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:48.731 23:21:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137441 00:07:48.989 23:21:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:48.989 23:21:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:48.989 23:21:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137441' 00:07:48.989 killing process with pid 137441 00:07:48.989 23:21:09 -- common/autotest_common.sh@945 -- # kill 137441 00:07:48.989 23:21:09 -- common/autotest_common.sh@950 -- # wait 137441 00:07:49.249 00:07:49.249 real 0m1.892s 00:07:49.249 user 0m2.086s 00:07:49.249 sys 0m0.647s 00:07:49.249 23:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.249 23:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:49.249 ************************************ 00:07:49.249 END TEST default_locks_via_rpc 00:07:49.249 ************************************ 00:07:49.249 23:21:10 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:49.249 23:21:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.249 23:21:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.249 23:21:10 -- common/autotest_common.sh@10 
-- # set +x 00:07:49.249 ************************************ 00:07:49.249 START TEST non_locking_app_on_locked_coremask 00:07:49.249 ************************************ 00:07:49.249 23:21:10 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:07:49.249 23:21:10 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=137962 00:07:49.249 23:21:10 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.249 23:21:10 -- event/cpu_locks.sh@81 -- # waitforlisten 137962 /var/tmp/spdk.sock 00:07:49.249 23:21:10 -- common/autotest_common.sh@819 -- # '[' -z 137962 ']' 00:07:49.249 23:21:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.249 23:21:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.249 23:21:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.249 23:21:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.249 23:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:49.249 [2024-07-11 23:21:10.189809] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:49.249 [2024-07-11 23:21:10.189886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137962 ] 00:07:49.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.508 [2024-07-11 23:21:10.257312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.508 [2024-07-11 23:21:10.352897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.508 [2024-07-11 23:21:10.353076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.881 23:21:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.881 23:21:11 -- common/autotest_common.sh@852 -- # return 0 00:07:50.881 23:21:11 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=138271 00:07:50.881 23:21:11 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:50.881 23:21:11 -- event/cpu_locks.sh@85 -- # waitforlisten 138271 /var/tmp/spdk2.sock 00:07:50.881 23:21:11 -- common/autotest_common.sh@819 -- # '[' -z 138271 ']' 00:07:50.881 23:21:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.881 23:21:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:50.881 23:21:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.881 23:21:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:50.881 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:50.881 [2024-07-11 23:21:11.512598] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:50.881 [2024-07-11 23:21:11.512694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138271 ] 00:07:50.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.881 [2024-07-11 23:21:11.615367] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:50.881 [2024-07-11 23:21:11.615414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.881 [2024-07-11 23:21:11.802723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.881 [2024-07-11 23:21:11.802916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.815 23:21:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:51.815 23:21:12 -- common/autotest_common.sh@852 -- # return 0 00:07:51.815 23:21:12 -- event/cpu_locks.sh@87 -- # locks_exist 137962 00:07:51.815 23:21:12 -- event/cpu_locks.sh@22 -- # lslocks -p 137962 00:07:51.815 23:21:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:52.747 lslocks: write error 00:07:52.747 23:21:13 -- event/cpu_locks.sh@89 -- # killprocess 137962 00:07:52.747 23:21:13 -- common/autotest_common.sh@926 -- # '[' -z 137962 ']' 00:07:52.747 23:21:13 -- common/autotest_common.sh@930 -- # kill -0 137962 00:07:52.747 23:21:13 -- common/autotest_common.sh@931 -- # uname 00:07:52.747 23:21:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:52.747 23:21:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137962 00:07:52.747 23:21:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:52.747 23:21:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:52.747 23:21:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137962' 00:07:52.747 killing process with pid 137962 00:07:52.747 23:21:13 -- common/autotest_common.sh@945 -- # kill 137962 00:07:52.747 23:21:13 -- common/autotest_common.sh@950 -- # wait 137962 00:07:53.678 23:21:14 -- event/cpu_locks.sh@90 -- # killprocess 138271 00:07:53.678 23:21:14 -- common/autotest_common.sh@926 -- # '[' -z 138271 ']' 00:07:53.678 23:21:14 -- common/autotest_common.sh@930 -- # kill -0 138271 00:07:53.678 23:21:14 -- common/autotest_common.sh@931 -- # uname 00:07:53.678 23:21:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:53.678 23:21:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138271 00:07:53.678 23:21:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:53.678 23:21:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:53.678 23:21:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138271' 00:07:53.678 killing process with pid 138271 00:07:53.678 23:21:14 -- common/autotest_common.sh@945 -- # kill 138271 00:07:53.678 23:21:14 -- common/autotest_common.sh@950 -- # wait 138271 00:07:53.936 00:07:53.937 real 0m4.628s 00:07:53.937 user 0m5.470s 00:07:53.937 sys 0m1.280s 00:07:53.937 23:21:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.937 23:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:53.937 ************************************ 00:07:53.937 END TEST non_locking_app_on_locked_coremask 00:07:53.937 ************************************ 00:07:53.937 23:21:14 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
00:07:53.937 23:21:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.937 23:21:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.937 23:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:53.937 ************************************ 00:07:53.937 START TEST locking_app_on_unlocked_coremask 00:07:53.937 ************************************ 00:07:53.937 23:21:14 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:07:53.937 23:21:14 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=138641 00:07:53.937 23:21:14 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:53.937 23:21:14 -- event/cpu_locks.sh@99 -- # waitforlisten 138641 /var/tmp/spdk.sock 00:07:53.937 23:21:14 -- common/autotest_common.sh@819 -- # '[' -z 138641 ']' 00:07:53.937 23:21:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.937 23:21:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:53.937 23:21:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.937 23:21:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:53.937 23:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:53.937 [2024-07-11 23:21:14.857335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:53.937 [2024-07-11 23:21:14.857443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138641 ] 00:07:54.196 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.196 [2024-07-11 23:21:14.928538] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:54.196 [2024-07-11 23:21:14.928581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.196 [2024-07-11 23:21:15.020006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.196 [2024-07-11 23:21:15.020204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.129 23:21:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.129 23:21:15 -- common/autotest_common.sh@852 -- # return 0 00:07:55.129 23:21:15 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=138735 00:07:55.129 23:21:15 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.129 23:21:15 -- event/cpu_locks.sh@103 -- # waitforlisten 138735 /var/tmp/spdk2.sock 00:07:55.129 23:21:15 -- common/autotest_common.sh@819 -- # '[' -z 138735 ']' 00:07:55.129 23:21:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.129 23:21:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:55.129 23:21:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
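As the traced commands show, this test starts the first target on core 0 with --disable-cpumask-locks and then a second target on the same core with locking left on; the second should come up cleanly because the unlocked first app never claimed the core. A condensed sketch of the scenario, with the binary path abbreviated and flags exactly as traced:

    # locking_app_on_unlocked_coremask, reduced to its two launches:
    spdk_tgt -m 0x1 --disable-cpumask-locks &      # pid 138641: takes no core lock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # pid 138735: claims core 0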
00:07:55.129 23:21:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:55.129 23:21:15 -- common/autotest_common.sh@10 -- # set +x 00:07:55.129 [2024-07-11 23:21:15.943576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:55.129 [2024-07-11 23:21:15.943680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138735 ] 00:07:55.129 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.129 [2024-07-11 23:21:16.054319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.387 [2024-07-11 23:21:16.238985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.387 [2024-07-11 23:21:16.243192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.319 23:21:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:56.319 23:21:17 -- common/autotest_common.sh@852 -- # return 0 00:07:56.319 23:21:17 -- event/cpu_locks.sh@105 -- # locks_exist 138735 00:07:56.319 23:21:17 -- event/cpu_locks.sh@22 -- # lslocks -p 138735 00:07:56.319 23:21:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.884 lslocks: write error 00:07:56.884 23:21:17 -- event/cpu_locks.sh@107 -- # killprocess 138641 00:07:56.884 23:21:17 -- common/autotest_common.sh@926 -- # '[' -z 138641 ']' 00:07:56.884 23:21:17 -- common/autotest_common.sh@930 -- # kill -0 138641 00:07:56.884 23:21:17 -- common/autotest_common.sh@931 -- # uname 00:07:57.141 23:21:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.141 23:21:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138641 00:07:57.141 23:21:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.141 23:21:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.141 23:21:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138641' 00:07:57.141 killing process with pid 138641 00:07:57.141 23:21:17 -- common/autotest_common.sh@945 -- # kill 138641 00:07:57.141 23:21:17 -- common/autotest_common.sh@950 -- # wait 138641 00:07:58.075 23:21:18 -- event/cpu_locks.sh@108 -- # killprocess 138735 00:07:58.075 23:21:18 -- common/autotest_common.sh@926 -- # '[' -z 138735 ']' 00:07:58.075 23:21:18 -- common/autotest_common.sh@930 -- # kill -0 138735 00:07:58.075 23:21:18 -- common/autotest_common.sh@931 -- # uname 00:07:58.075 23:21:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.075 23:21:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138735 00:07:58.075 23:21:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.075 23:21:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.075 23:21:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138735' 00:07:58.075 killing process with pid 138735 00:07:58.075 23:21:18 -- common/autotest_common.sh@945 -- # kill 138735 00:07:58.075 23:21:18 -- common/autotest_common.sh@950 -- # wait 138735 00:07:58.334 00:07:58.334 real 0m4.406s 00:07:58.334 user 0m5.128s 00:07:58.334 sys 0m1.278s 00:07:58.334 23:21:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.334 23:21:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.334 ************************************ 00:07:58.334 END TEST locking_app_on_unlocked_coremask 00:07:58.334 
************************************ 00:07:58.334 23:21:19 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:58.334 23:21:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.334 23:21:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.334 23:21:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.334 ************************************ 00:07:58.334 START TEST locking_app_on_locked_coremask 00:07:58.334 ************************************ 00:07:58.334 23:21:19 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:58.334 23:21:19 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=139171 00:07:58.335 23:21:19 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.335 23:21:19 -- event/cpu_locks.sh@116 -- # waitforlisten 139171 /var/tmp/spdk.sock 00:07:58.335 23:21:19 -- common/autotest_common.sh@819 -- # '[' -z 139171 ']' 00:07:58.335 23:21:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.335 23:21:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.335 23:21:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.335 23:21:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.335 23:21:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.592 [2024-07-11 23:21:19.339956] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:58.592 [2024-07-11 23:21:19.340161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139171 ] 00:07:58.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.592 [2024-07-11 23:21:19.422969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.592 [2024-07-11 23:21:19.518667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.592 [2024-07-11 23:21:19.518841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.527 23:21:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.527 23:21:20 -- common/autotest_common.sh@852 -- # return 0 00:07:59.527 23:21:20 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=139311 00:07:59.527 23:21:20 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:59.527 23:21:20 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 139311 /var/tmp/spdk2.sock 00:07:59.527 23:21:20 -- common/autotest_common.sh@640 -- # local es=0 00:07:59.527 23:21:20 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 139311 /var/tmp/spdk2.sock 00:07:59.527 23:21:20 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:59.527 23:21:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.527 23:21:20 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:59.527 23:21:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.527 23:21:20 -- common/autotest_common.sh@643 -- # waitforlisten 139311 /var/tmp/spdk2.sock 00:07:59.527 23:21:20 -- common/autotest_common.sh@819 -- # '[' -z 139311 ']' 
00:07:59.527 23:21:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.527 23:21:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:59.527 23:21:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.527 23:21:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:59.527 23:21:20 -- common/autotest_common.sh@10 -- # set +x 00:07:59.527 [2024-07-11 23:21:20.415032] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:59.527 [2024-07-11 23:21:20.415125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139311 ] 00:07:59.527 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.784 [2024-07-11 23:21:20.512561] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 139171 has claimed it. 00:07:59.784 [2024-07-11 23:21:20.512612] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:00.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (139311) - No such process 00:08:00.349 ERROR: process (pid: 139311) is no longer running 00:08:00.349 23:21:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:00.349 23:21:21 -- common/autotest_common.sh@852 -- # return 1 00:08:00.349 23:21:21 -- common/autotest_common.sh@643 -- # es=1 00:08:00.349 23:21:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:00.349 23:21:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:00.349 23:21:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:00.349 23:21:21 -- event/cpu_locks.sh@122 -- # locks_exist 139171 00:08:00.349 23:21:21 -- event/cpu_locks.sh@22 -- # lslocks -p 139171 00:08:00.349 23:21:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:00.919 lslocks: write error 00:08:00.919 23:21:21 -- event/cpu_locks.sh@124 -- # killprocess 139171 00:08:00.919 23:21:21 -- common/autotest_common.sh@926 -- # '[' -z 139171 ']' 00:08:00.919 23:21:21 -- common/autotest_common.sh@930 -- # kill -0 139171 00:08:00.919 23:21:21 -- common/autotest_common.sh@931 -- # uname 00:08:00.919 23:21:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:00.919 23:21:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139171 00:08:00.919 23:21:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:00.919 23:21:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:00.919 23:21:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139171' 00:08:00.919 killing process with pid 139171 00:08:00.919 23:21:21 -- common/autotest_common.sh@945 -- # kill 139171 00:08:00.919 23:21:21 -- common/autotest_common.sh@950 -- # wait 139171 00:08:01.516 00:08:01.517 real 0m3.045s 00:08:01.517 user 0m3.630s 00:08:01.517 sys 0m0.910s 00:08:01.517 23:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.517 23:21:22 -- common/autotest_common.sh@10 -- # set +x 00:08:01.517 ************************************ 00:08:01.517 END TEST locking_app_on_locked_coremask 00:08:01.517 ************************************ 00:08:01.517 23:21:22 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:01.517 23:21:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:01.517 23:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.517 23:21:22 -- common/autotest_common.sh@10 -- # set +x 00:08:01.517 ************************************ 00:08:01.517 START TEST locking_overlapped_coremask 00:08:01.517 ************************************ 00:08:01.517 23:21:22 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:08:01.517 23:21:22 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=139613 00:08:01.517 23:21:22 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:01.517 23:21:22 -- event/cpu_locks.sh@133 -- # waitforlisten 139613 /var/tmp/spdk.sock 00:08:01.517 23:21:22 -- common/autotest_common.sh@819 -- # '[' -z 139613 ']' 00:08:01.517 23:21:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.517 23:21:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.517 23:21:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.517 23:21:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.517 23:21:22 -- common/autotest_common.sh@10 -- # set +x 00:08:01.517 [2024-07-11 23:21:22.421468] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:01.517 [2024-07-11 23:21:22.421647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139613 ] 00:08:01.776 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.776 [2024-07-11 23:21:22.534466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.776 [2024-07-11 23:21:22.628026] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.776 [2024-07-11 23:21:22.628278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.776 [2024-07-11 23:21:22.628315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.776 [2024-07-11 23:21:22.628317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.707 23:21:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:02.707 23:21:23 -- common/autotest_common.sh@852 -- # return 0 00:08:02.707 23:21:23 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=139758 00:08:02.707 23:21:23 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 139758 /var/tmp/spdk2.sock 00:08:02.707 23:21:23 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:02.707 23:21:23 -- common/autotest_common.sh@640 -- # local es=0 00:08:02.707 23:21:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 139758 /var/tmp/spdk2.sock 00:08:02.707 23:21:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:02.707 23:21:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:02.707 23:21:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:02.707 23:21:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:02.707 23:21:23 -- common/autotest_common.sh@643 -- # 
waitforlisten 139758 /var/tmp/spdk2.sock 00:08:02.707 23:21:23 -- common/autotest_common.sh@819 -- # '[' -z 139758 ']' 00:08:02.707 23:21:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.707 23:21:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:02.707 23:21:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:02.707 23:21:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:02.707 23:21:23 -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 [2024-07-11 23:21:23.714433] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:02.964 [2024-07-11 23:21:23.714547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139758 ] 00:08:02.964 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.964 [2024-07-11 23:21:23.816675] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 139613 has claimed it. 00:08:02.965 [2024-07-11 23:21:23.816734] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:03.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (139758) - No such process 00:08:03.895 ERROR: process (pid: 139758) is no longer running 00:08:03.895 23:21:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:03.895 23:21:24 -- common/autotest_common.sh@852 -- # return 1 00:08:03.895 23:21:24 -- common/autotest_common.sh@643 -- # es=1 00:08:03.895 23:21:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:03.895 23:21:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:03.895 23:21:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:03.895 23:21:24 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:03.895 23:21:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:03.895 23:21:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:03.895 23:21:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:03.895 23:21:24 -- event/cpu_locks.sh@141 -- # killprocess 139613 00:08:03.895 23:21:24 -- common/autotest_common.sh@926 -- # '[' -z 139613 ']' 00:08:03.895 23:21:24 -- common/autotest_common.sh@930 -- # kill -0 139613 00:08:03.895 23:21:24 -- common/autotest_common.sh@931 -- # uname 00:08:03.895 23:21:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:03.895 23:21:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139613 00:08:03.895 23:21:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:03.895 23:21:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:03.895 23:21:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139613' 00:08:03.895 killing process with pid 139613 00:08:03.895 23:21:24 -- common/autotest_common.sh@945 -- # kill 139613 00:08:03.895 23:21:24 -- common/autotest_common.sh@950 -- # wait 139613 
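The check_remaining_locks sequence above is worth spelling out: after the overlapping app (mask 0x1c) is refused core 2 and exits, the test asserts that the three lock files taken by pid 139613 (mask 0x7, cores 0-2) are still the only ones present. The traced logic, reconstructed as a standalone sketch:

    # check_remaining_locks, as traced: compare what exists on disk against
    # what a 0x7 coremask should have left behind.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected core locks"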
00:08:04.153 00:08:04.153 real 0m2.696s 00:08:04.153 user 0m7.983s 00:08:04.153 sys 0m0.592s 00:08:04.153 23:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.153 23:21:25 -- common/autotest_common.sh@10 -- # set +x 00:08:04.153 ************************************ 00:08:04.153 END TEST locking_overlapped_coremask 00:08:04.153 ************************************ 00:08:04.154 23:21:25 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:04.154 23:21:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.154 23:21:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.154 23:21:25 -- common/autotest_common.sh@10 -- # set +x 00:08:04.154 ************************************ 00:08:04.154 START TEST locking_overlapped_coremask_via_rpc 00:08:04.154 ************************************ 00:08:04.154 23:21:25 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:08:04.154 23:21:25 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=139928 00:08:04.154 23:21:25 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:04.154 23:21:25 -- event/cpu_locks.sh@149 -- # waitforlisten 139928 /var/tmp/spdk.sock 00:08:04.154 23:21:25 -- common/autotest_common.sh@819 -- # '[' -z 139928 ']' 00:08:04.154 23:21:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.154 23:21:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:04.154 23:21:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.154 23:21:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:04.154 23:21:25 -- common/autotest_common.sh@10 -- # set +x 00:08:04.154 [2024-07-11 23:21:25.097319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:04.154 [2024-07-11 23:21:25.097420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139928 ] 00:08:04.412 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.412 [2024-07-11 23:21:25.168312] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:04.412 [2024-07-11 23:21:25.168355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.412 [2024-07-11 23:21:25.261886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.412 [2024-07-11 23:21:25.262128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.412 [2024-07-11 23:21:25.262171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.412 [2024-07-11 23:21:25.262176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.785 23:21:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:05.785 23:21:26 -- common/autotest_common.sh@852 -- # return 0 00:08:05.785 23:21:26 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=140159 00:08:05.785 23:21:26 -- event/cpu_locks.sh@153 -- # waitforlisten 140159 /var/tmp/spdk2.sock 00:08:05.785 23:21:26 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:05.785 23:21:26 -- common/autotest_common.sh@819 -- # '[' -z 140159 ']' 00:08:05.785 23:21:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.785 23:21:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:05.785 23:21:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.785 23:21:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:05.785 23:21:26 -- common/autotest_common.sh@10 -- # set +x 00:08:05.785 [2024-07-11 23:21:26.388541] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:05.785 [2024-07-11 23:21:26.388648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140159 ] 00:08:05.785 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.785 [2024-07-11 23:21:26.488688] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:05.785 [2024-07-11 23:21:26.488729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.785 [2024-07-11 23:21:26.656711] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.785 [2024-07-11 23:21:26.657047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.785 [2024-07-11 23:21:26.657109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:05.785 [2024-07-11 23:21:26.657114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.719 23:21:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:06.719 23:21:27 -- common/autotest_common.sh@852 -- # return 0 00:08:06.719 23:21:27 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:06.719 23:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.719 23:21:27 -- common/autotest_common.sh@10 -- # set +x 00:08:06.719 23:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.719 23:21:27 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:06.719 23:21:27 -- common/autotest_common.sh@640 -- # local es=0 00:08:06.719 23:21:27 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:06.719 23:21:27 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:08:06.719 23:21:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.719 23:21:27 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:08:06.719 23:21:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.719 23:21:27 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:06.719 23:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.719 23:21:27 -- common/autotest_common.sh@10 -- # set +x 00:08:06.719 [2024-07-11 23:21:27.457236] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 139928 has claimed it. 00:08:06.719 request: 00:08:06.719 { 00:08:06.719 "method": "framework_enable_cpumask_locks", 00:08:06.719 "req_id": 1 00:08:06.719 } 00:08:06.719 Got JSON-RPC error response 00:08:06.719 response: 00:08:06.719 { 00:08:06.719 "code": -32603, 00:08:06.719 "message": "Failed to claim CPU core: 2" 00:08:06.719 } 00:08:06.719 23:21:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:08:06.719 23:21:27 -- common/autotest_common.sh@643 -- # es=1 00:08:06.719 23:21:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:06.719 23:21:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:06.719 23:21:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:06.719 23:21:27 -- event/cpu_locks.sh@158 -- # waitforlisten 139928 /var/tmp/spdk.sock 00:08:06.719 23:21:27 -- common/autotest_common.sh@819 -- # '[' -z 139928 ']' 00:08:06.719 23:21:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.719 23:21:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:06.719 23:21:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
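The JSON-RPC exchange above is the heart of locking_overlapped_coremask_via_rpc: both targets start with --disable-cpumask-locks, the first (mask 0x7) then claims its locks at runtime via framework_enable_cpumask_locks, and the same call against the second target (mask 0x1c) fails with -32603 because core 2 is already held by pid 139928. A sketch of replaying the exchange by hand, assuming the RPC is exposed under the same name by SPDK's scripts/rpc.py:

    # Hypothetical manual replay of the traced RPC calls:
    scripts/rpc.py framework_enable_cpumask_locks                          # pid 139928: ok
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => JSON-RPC error -32603: "Failed to claim CPU core: 2"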
00:08:06.719 23:21:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:06.719 23:21:27 -- common/autotest_common.sh@10 -- # set +x 00:08:06.977 23:21:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:06.977 23:21:27 -- common/autotest_common.sh@852 -- # return 0 00:08:06.977 23:21:27 -- event/cpu_locks.sh@159 -- # waitforlisten 140159 /var/tmp/spdk2.sock 00:08:06.977 23:21:27 -- common/autotest_common.sh@819 -- # '[' -z 140159 ']' 00:08:06.977 23:21:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:06.977 23:21:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:06.977 23:21:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:06.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:06.977 23:21:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:06.977 23:21:27 -- common/autotest_common.sh@10 -- # set +x 00:08:07.236 23:21:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:07.236 23:21:28 -- common/autotest_common.sh@852 -- # return 0 00:08:07.236 23:21:28 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:07.236 23:21:28 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:07.236 23:21:28 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:07.236 23:21:28 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:07.236 00:08:07.236 real 0m3.003s 00:08:07.236 user 0m1.674s 00:08:07.236 sys 0m0.253s 00:08:07.236 23:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.236 23:21:28 -- common/autotest_common.sh@10 -- # set +x 00:08:07.236 ************************************ 00:08:07.236 END TEST locking_overlapped_coremask_via_rpc 00:08:07.236 ************************************ 00:08:07.236 23:21:28 -- event/cpu_locks.sh@174 -- # cleanup 00:08:07.236 23:21:28 -- event/cpu_locks.sh@15 -- # [[ -z 139928 ]] 00:08:07.236 23:21:28 -- event/cpu_locks.sh@15 -- # killprocess 139928 00:08:07.236 23:21:28 -- common/autotest_common.sh@926 -- # '[' -z 139928 ']' 00:08:07.236 23:21:28 -- common/autotest_common.sh@930 -- # kill -0 139928 00:08:07.236 23:21:28 -- common/autotest_common.sh@931 -- # uname 00:08:07.236 23:21:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:07.236 23:21:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139928 00:08:07.236 23:21:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:07.236 23:21:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:07.236 23:21:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139928' 00:08:07.236 killing process with pid 139928 00:08:07.236 23:21:28 -- common/autotest_common.sh@945 -- # kill 139928 00:08:07.236 23:21:28 -- common/autotest_common.sh@950 -- # wait 139928 00:08:07.803 23:21:28 -- event/cpu_locks.sh@16 -- # [[ -z 140159 ]] 00:08:07.803 23:21:28 -- event/cpu_locks.sh@16 -- # killprocess 140159 00:08:07.803 23:21:28 -- common/autotest_common.sh@926 -- # '[' -z 140159 ']' 00:08:07.803 23:21:28 -- common/autotest_common.sh@930 -- # kill -0 140159 00:08:07.803 23:21:28 -- common/autotest_common.sh@931 -- # uname 00:08:07.803 
23:21:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:07.803 23:21:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140159 00:08:07.803 23:21:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:07.803 23:21:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:07.803 23:21:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140159' 00:08:07.803 killing process with pid 140159 00:08:07.803 23:21:28 -- common/autotest_common.sh@945 -- # kill 140159 00:08:07.803 23:21:28 -- common/autotest_common.sh@950 -- # wait 140159 00:08:08.062 23:21:28 -- event/cpu_locks.sh@18 -- # rm -f 00:08:08.062 23:21:28 -- event/cpu_locks.sh@1 -- # cleanup 00:08:08.062 23:21:28 -- event/cpu_locks.sh@15 -- # [[ -z 139928 ]] 00:08:08.062 23:21:28 -- event/cpu_locks.sh@15 -- # killprocess 139928 00:08:08.062 23:21:28 -- common/autotest_common.sh@926 -- # '[' -z 139928 ']' 00:08:08.062 23:21:28 -- common/autotest_common.sh@930 -- # kill -0 139928 00:08:08.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (139928) - No such process 00:08:08.062 23:21:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 139928 is not found' 00:08:08.062 Process with pid 139928 is not found 00:08:08.062 23:21:28 -- event/cpu_locks.sh@16 -- # [[ -z 140159 ]] 00:08:08.062 23:21:28 -- event/cpu_locks.sh@16 -- # killprocess 140159 00:08:08.062 23:21:28 -- common/autotest_common.sh@926 -- # '[' -z 140159 ']' 00:08:08.062 23:21:28 -- common/autotest_common.sh@930 -- # kill -0 140159 00:08:08.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (140159) - No such process 00:08:08.062 23:21:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 140159 is not found' 00:08:08.062 Process with pid 140159 is not found 00:08:08.062 23:21:28 -- event/cpu_locks.sh@18 -- # rm -f 00:08:08.062 00:08:08.062 real 0m22.682s 00:08:08.062 user 0m42.230s 00:08:08.062 sys 0m6.534s 00:08:08.062 23:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.062 23:21:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.062 ************************************ 00:08:08.062 END TEST cpu_locks 00:08:08.062 ************************************ 00:08:08.062 00:08:08.062 real 0m53.593s 00:08:08.062 user 1m47.070s 00:08:08.062 sys 0m11.863s 00:08:08.062 23:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.062 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:08:08.062 ************************************ 00:08:08.062 END TEST event 00:08:08.062 ************************************ 00:08:08.320 23:21:29 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:08.320 23:21:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.320 23:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.320 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:08:08.321 ************************************ 00:08:08.321 START TEST thread 00:08:08.321 ************************************ 00:08:08.321 23:21:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:08.321 * Looking for test storage... 
00:08:08.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:08.321 23:21:29 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:08.321 23:21:29 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:08.321 23:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.321 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:08:08.321 ************************************ 00:08:08.321 START TEST thread_poller_perf 00:08:08.321 ************************************ 00:08:08.321 23:21:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:08.321 [2024-07-11 23:21:29.113618] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:08.321 [2024-07-11 23:21:29.113776] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140561 ] 00:08:08.321 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.321 [2024-07-11 23:21:29.214099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.579 [2024-07-11 23:21:29.305754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.579 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:09.514 ====================================== 00:08:09.514 busy:2713941828 (cyc) 00:08:09.514 total_run_count: 279000 00:08:09.514 tsc_hz: 2700000000 (cyc) 00:08:09.514 ====================================== 00:08:09.514 poller_cost: 9727 (cyc), 3602 (nsec) 00:08:09.514 00:08:09.514 real 0m1.307s 00:08:09.514 user 0m1.184s 00:08:09.514 sys 0m0.115s 00:08:09.514 23:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.514 23:21:30 -- common/autotest_common.sh@10 -- # set +x 00:08:09.514 ************************************ 00:08:09.514 END TEST thread_poller_perf 00:08:09.514 ************************************ 00:08:09.514 23:21:30 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:09.514 23:21:30 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:09.514 23:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.514 23:21:30 -- common/autotest_common.sh@10 -- # set +x 00:08:09.514 ************************************ 00:08:09.514 START TEST thread_poller_perf 00:08:09.514 ************************************ 00:08:09.514 23:21:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:09.514 [2024-07-11 23:21:30.443036] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
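The numbers in the summary block above are internally consistent and worth a quick check: poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds with the reported TSC frequency. The same arithmetic applies to the zero-period run that follows.

    # Reproducing the reported poller_cost from the raw counters above:
    echo $(( 2713941828 / 279000 ))             # 9727 cyc per poller invocation
    echo $(( 9727 * 1000000000 / 2700000000 ))  # 3602 nsec at the 2.7 GHz TSC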
00:08:09.514 [2024-07-11 23:21:30.443126] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140717 ] 00:08:09.772 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.772 [2024-07-11 23:21:30.509575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.772 [2024-07-11 23:21:30.602793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.772 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:11.146 ====================================== 00:08:11.146 busy:2703649484 (cyc) 00:08:11.146 total_run_count: 3820000 00:08:11.146 tsc_hz: 2700000000 (cyc) 00:08:11.146 ====================================== 00:08:11.146 poller_cost: 707 (cyc), 261 (nsec) 00:08:11.146 00:08:11.146 real 0m1.256s 00:08:11.146 user 0m1.164s 00:08:11.146 sys 0m0.085s 00:08:11.146 23:21:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.146 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.146 ************************************ 00:08:11.146 END TEST thread_poller_perf 00:08:11.146 ************************************ 00:08:11.146 23:21:31 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:11.146 00:08:11.146 real 0m2.680s 00:08:11.146 user 0m2.400s 00:08:11.146 sys 0m0.281s 00:08:11.146 23:21:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.146 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.146 ************************************ 00:08:11.146 END TEST thread 00:08:11.146 ************************************ 00:08:11.146 23:21:31 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:11.146 23:21:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.146 23:21:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.146 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.146 ************************************ 00:08:11.146 START TEST accel 00:08:11.146 ************************************ 00:08:11.146 23:21:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:11.146 * Looking for test storage... 00:08:11.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:11.146 23:21:31 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:08:11.146 23:21:31 -- accel/accel.sh@74 -- # get_expected_opcs 00:08:11.146 23:21:31 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:11.146 23:21:31 -- accel/accel.sh@59 -- # spdk_tgt_pid=140913 00:08:11.146 23:21:31 -- accel/accel.sh@60 -- # waitforlisten 140913 00:08:11.146 23:21:31 -- common/autotest_common.sh@819 -- # '[' -z 140913 ']' 00:08:11.146 23:21:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.146 23:21:31 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:11.146 23:21:31 -- accel/accel.sh@58 -- # build_accel_config 00:08:11.146 23:21:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.146 23:21:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.146 23:21:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:11.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.146 23:21:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.146 23:21:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.146 23:21:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.146 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.147 23:21:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.147 23:21:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.147 23:21:31 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.147 23:21:31 -- accel/accel.sh@42 -- # jq -r . 00:08:11.147 [2024-07-11 23:21:31.871872] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:11.147 [2024-07-11 23:21:31.872043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140913 ] 00:08:11.147 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.147 [2024-07-11 23:21:31.959477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.147 [2024-07-11 23:21:32.052638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.147 [2024-07-11 23:21:32.052824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.082 23:21:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:12.082 23:21:32 -- common/autotest_common.sh@852 -- # return 0 00:08:12.082 23:21:32 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:12.082 23:21:32 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:08:12.082 23:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.082 23:21:32 -- common/autotest_common.sh@10 -- # set +x 00:08:12.082 23:21:32 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:12.082 23:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # IFS== 00:08:12.082 23:21:32 -- accel/accel.sh@64 -- # read -r opc module 00:08:12.082 23:21:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:12.082 23:21:32 -- accel/accel.sh@67 -- # killprocess 140913 00:08:12.082 23:21:32 -- common/autotest_common.sh@926 -- # '[' -z 140913 ']' 00:08:12.082 23:21:32 -- common/autotest_common.sh@930 -- # kill -0 140913 00:08:12.082 23:21:32 -- common/autotest_common.sh@931 -- # uname 00:08:12.082 23:21:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:12.082 23:21:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140913 00:08:12.082 23:21:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:12.082 23:21:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:12.082 23:21:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140913' 00:08:12.082 killing process with pid 140913 00:08:12.082 23:21:33 -- common/autotest_common.sh@945 -- # kill 140913 00:08:12.082 23:21:33 -- common/autotest_common.sh@950 -- # wait 140913 00:08:12.647 23:21:33 -- accel/accel.sh@68 -- # trap - ERR 00:08:12.647 23:21:33 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:08:12.647 23:21:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:12.647 23:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.647 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.647 23:21:33 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:08:12.647 23:21:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:12.647 23:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.647 23:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.647 23:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.647 23:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.647 23:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.647 23:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.647 23:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.647 23:21:33 -- accel/accel.sh@42 -- # jq -r . 
00:08:12.647 23:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.647 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.647 23:21:33 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:12.647 23:21:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:12.647 23:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.647 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.647 ************************************ 00:08:12.647 START TEST accel_missing_filename 00:08:12.647 ************************************ 00:08:12.647 23:21:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:08:12.647 23:21:33 -- common/autotest_common.sh@640 -- # local es=0 00:08:12.647 23:21:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:12.647 23:21:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:12.647 23:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.647 23:21:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:12.647 23:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.647 23:21:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:08:12.647 23:21:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:12.647 23:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.647 23:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.647 23:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.647 23:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.647 23:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.647 23:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.647 23:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.647 23:21:33 -- accel/accel.sh@42 -- # jq -r . 00:08:12.647 [2024-07-11 23:21:33.540212] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:12.647 [2024-07-11 23:21:33.540318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141211 ] 00:08:12.647 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.904 [2024-07-11 23:21:33.616111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.904 [2024-07-11 23:21:33.709426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.904 [2024-07-11 23:21:33.772720] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.904 [2024-07-11 23:21:33.853888] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:13.162 A filename is required. 
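The "A filename is required." failure above is the expected outcome: for compress and decompress workloads accel_perf takes its input from a file named with -l, and this negative test deliberately omits it. Sketched with the paths this log uses elsewhere:

    # The failing call, and the -l form the next test builds on
    # (that test adds -y, which then fails for its own reason):
    build/examples/accel_perf -t 1 -w compress               # => "A filename is required."
    build/examples/accel_perf -t 1 -w compress -l test/accel/bib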
00:08:13.162 23:21:33 -- common/autotest_common.sh@643 -- # es=234 00:08:13.162 23:21:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.162 23:21:33 -- common/autotest_common.sh@652 -- # es=106 00:08:13.162 23:21:33 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:13.162 23:21:33 -- common/autotest_common.sh@660 -- # es=1 00:08:13.162 23:21:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.162 00:08:13.162 real 0m0.418s 00:08:13.162 user 0m0.372s 00:08:13.162 sys 0m0.161s 00:08:13.162 23:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.162 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:08:13.162 ************************************ 00:08:13.162 END TEST accel_missing_filename 00:08:13.162 ************************************ 00:08:13.162 23:21:33 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:13.162 23:21:33 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:13.162 23:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.162 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:08:13.162 ************************************ 00:08:13.162 START TEST accel_compress_verify 00:08:13.162 ************************************ 00:08:13.162 23:21:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:13.162 23:21:33 -- common/autotest_common.sh@640 -- # local es=0 00:08:13.162 23:21:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:13.162 23:21:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:13.162 23:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.162 23:21:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:13.162 23:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.162 23:21:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:13.162 23:21:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:13.162 23:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.162 23:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.162 23:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.162 23:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.162 23:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.162 23:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.162 23:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.162 23:21:33 -- accel/accel.sh@42 -- # jq -r . 00:08:13.162 [2024-07-11 23:21:33.990976] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:13.162 [2024-07-11 23:21:33.991071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141241 ] 00:08:13.162 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.162 [2024-07-11 23:21:34.059146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.421 [2024-07-11 23:21:34.153256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.421 [2024-07-11 23:21:34.212467] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.421 [2024-07-11 23:21:34.300299] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:13.680 00:08:13.680 Compression does not support the verify option, aborting. 00:08:13.680 23:21:34 -- common/autotest_common.sh@643 -- # es=161 00:08:13.680 23:21:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.680 23:21:34 -- common/autotest_common.sh@652 -- # es=33 00:08:13.680 23:21:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:13.680 23:21:34 -- common/autotest_common.sh@660 -- # es=1 00:08:13.680 23:21:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.680 00:08:13.680 real 0m0.414s 00:08:13.680 user 0m0.297s 00:08:13.680 sys 0m0.153s 00:08:13.680 23:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.680 23:21:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 ************************************ 00:08:13.680 END TEST accel_compress_verify 00:08:13.680 ************************************ 00:08:13.680 23:21:34 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:13.680 23:21:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:13.680 23:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.680 23:21:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 ************************************ 00:08:13.680 START TEST accel_wrong_workload 00:08:13.680 ************************************ 00:08:13.680 23:21:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:08:13.680 23:21:34 -- common/autotest_common.sh@640 -- # local es=0 00:08:13.680 23:21:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:13.680 23:21:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:13.680 23:21:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.680 23:21:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:13.680 23:21:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.680 23:21:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:08:13.680 23:21:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:13.680 23:21:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.680 23:21:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.680 23:21:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.680 23:21:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.680 23:21:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.680 23:21:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.680 23:21:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.680 23:21:34 -- accel/accel.sh@42 -- # jq -r . 
00:08:13.680 Unsupported workload type: foobar 00:08:13.680 [2024-07-11 23:21:34.437478] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:13.680 accel_perf options: 00:08:13.680 [-h help message] 00:08:13.680 [-q queue depth per core] 00:08:13.680 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:13.680 [-T number of threads per core 00:08:13.680 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:13.680 [-t time in seconds] 00:08:13.680 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:13.680 [ dif_verify, , dif_generate, dif_generate_copy 00:08:13.680 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:13.680 [-l for compress/decompress workloads, name of uncompressed input file 00:08:13.680 [-S for crc32c workload, use this seed value (default 0) 00:08:13.680 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:13.680 [-f for fill workload, use this BYTE value (default 255) 00:08:13.680 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:13.680 [-y verify result if this switch is on] 00:08:13.680 [-a tasks to allocate per core (default: same value as -q)] 00:08:13.680 Can be used to spread operations across a wider range of memory. 00:08:13.680 23:21:34 -- common/autotest_common.sh@643 -- # es=1 00:08:13.680 23:21:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.680 23:21:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:13.680 23:21:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.680 00:08:13.680 real 0m0.025s 00:08:13.680 user 0m0.013s 00:08:13.680 sys 0m0.011s 00:08:13.680 23:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.680 23:21:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 ************************************ 00:08:13.680 END TEST accel_wrong_workload 00:08:13.680 ************************************ 00:08:13.680 Error: writing output failed: Broken pipe 00:08:13.680 23:21:34 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:13.680 23:21:34 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:13.680 23:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.680 23:21:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 ************************************ 00:08:13.680 START TEST accel_negative_buffers 00:08:13.680 ************************************ 00:08:13.680 23:21:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:13.680 23:21:34 -- common/autotest_common.sh@640 -- # local es=0 00:08:13.680 23:21:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:13.680 23:21:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:13.680 23:21:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.680 23:21:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:13.680 23:21:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.680 23:21:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:08:13.680 23:21:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:08:13.680 23:21:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.680 23:21:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.680 23:21:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.680 23:21:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.680 23:21:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.680 23:21:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.680 23:21:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.680 23:21:34 -- accel/accel.sh@42 -- # jq -r . 00:08:13.680 -x option must be non-negative. 00:08:13.680 [2024-07-11 23:21:34.500529] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:13.680 accel_perf options: 00:08:13.680 [-h help message] 00:08:13.680 [-q queue depth per core] 00:08:13.680 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:13.680 [-T number of threads per core 00:08:13.680 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:13.680 [-t time in seconds] 00:08:13.680 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:13.680 [ dif_verify, , dif_generate, dif_generate_copy 00:08:13.680 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:13.680 [-l for compress/decompress workloads, name of uncompressed input file 00:08:13.680 [-S for crc32c workload, use this seed value (default 0) 00:08:13.680 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:13.680 [-f for fill workload, use this BYTE value (default 255) 00:08:13.680 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:13.680 [-y verify result if this switch is on] 00:08:13.680 [-a tasks to allocate per core (default: same value as -q)] 00:08:13.680 Can be used to spread operations across a wider range of memory. 
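The two failing runs above go through the negative-test wrapper from common/autotest_common.sh: NOT executes accel_perf expecting a non-zero exit, and the es=... assertions that follow show the wrapper normalizing the status before deciding pass/fail. A minimal sketch of that pattern, assuming simplified internals (the traces show es=$?, an (( es > 128 )) guard for signal-style statuses, and a case remap; the helper below paraphrases the idea and is not the verbatim SPDK implementation):

    NOT() {
        # Run the wrapped command; a negative test passes only if it fails.
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then
            # Exit status 128+N conventionally means "killed by signal N";
            # statuses in that range are remapped before the final assertion
            # (the trace above shows es=234 becoming es=106, then es=1).
            es=$(( es - 128 ))
        fi
        case "$es" in
            0) return 1 ;;  # command unexpectedly succeeded -> NOT fails
            *) return 0 ;;  # command failed as expected -> NOT passes
        esac
    }

Usage mirrors the run_test lines in this section, e.g. NOT accel_perf -t 1 -w foobar.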
00:08:13.680 23:21:34 -- common/autotest_common.sh@643 -- # es=1 00:08:13.680 23:21:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.680 23:21:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:13.680 23:21:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.680 00:08:13.680 real 0m0.037s 00:08:13.680 user 0m0.018s 00:08:13.680 sys 0m0.019s 00:08:13.680 23:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.680 23:21:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 ************************************ 00:08:13.680 END TEST accel_negative_buffers 00:08:13.680 ************************************ 00:08:13.680 23:21:34 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:13.680 23:21:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:13.680 23:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.680 23:21:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 ************************************ 00:08:13.680 START TEST accel_crc32c 00:08:13.680 ************************************ 00:08:13.680 23:21:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:13.714 23:21:34 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.714 23:21:34 -- accel/accel.sh@17 -- # local accel_module 00:08:13.714 23:21:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:13.714 23:21:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:13.714 23:21:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.714 23:21:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.714 23:21:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.714 23:21:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.714 23:21:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.714 23:21:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.714 23:21:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.714 23:21:34 -- accel/accel.sh@42 -- # jq -r . 00:08:13.714 Error: writing output failed: Broken pipe 00:08:13.714 [2024-07-11 23:21:34.556263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:13.714 [2024-07-11 23:21:34.556334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141418 ] 00:08:13.714 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.714 [2024-07-11 23:21:34.626892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.973 [2024-07-11 23:21:34.720277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.347 23:21:35 -- accel/accel.sh@18 -- # out=' 00:08:15.347 SPDK Configuration: 00:08:15.347 Core mask: 0x1 00:08:15.347 00:08:15.347 Accel Perf Configuration: 00:08:15.347 Workload Type: crc32c 00:08:15.347 CRC-32C seed: 32 00:08:15.347 Transfer size: 4096 bytes 00:08:15.347 Vector count 1 00:08:15.347 Module: software 00:08:15.347 Queue depth: 32 00:08:15.347 Allocate depth: 32 00:08:15.347 # threads/core: 1 00:08:15.347 Run time: 1 seconds 00:08:15.347 Verify: Yes 00:08:15.347 00:08:15.347 Running for 1 seconds... 
00:08:15.347 00:08:15.347 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:15.347 ------------------------------------------------------------------------------------ 00:08:15.347 0,0 406208/s 1586 MiB/s 0 0 00:08:15.347 ==================================================================================== 00:08:15.347 Total 406208/s 1586 MiB/s 0 0' 00:08:15.347 23:21:35 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:15.347 23:21:35 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:15.347 23:21:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.347 23:21:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.347 23:21:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.347 23:21:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.347 23:21:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.347 23:21:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.347 23:21:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.347 23:21:35 -- accel/accel.sh@42 -- # jq -r . 00:08:15.347 [2024-07-11 23:21:35.993041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:15.347 [2024-07-11 23:21:35.993232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141558 ] 00:08:15.347 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.347 [2024-07-11 23:21:36.086130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.347 [2024-07-11 23:21:36.179479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val= 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val= 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val=0x1 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val= 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val= 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val=crc32c 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val=32 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 
23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val= 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val=software 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@23 -- # accel_module=software 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val=32 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val=32 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val=1 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.347 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.347 23:21:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:15.347 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.348 23:21:36 -- accel/accel.sh@21 -- # val=Yes 00:08:15.348 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.348 23:21:36 -- accel/accel.sh@21 -- # val= 00:08:15.348 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.348 23:21:36 -- accel/accel.sh@21 -- # val= 00:08:15.348 23:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.348 23:21:36 -- accel/accel.sh@20 -- # read -r var val 00:08:16.721 23:21:37 -- accel/accel.sh@21 -- # val= 00:08:16.721 23:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.721 23:21:37 -- accel/accel.sh@21 -- # val= 00:08:16.721 23:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.721 23:21:37 -- accel/accel.sh@21 -- # val= 00:08:16.721 23:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.721 23:21:37 -- accel/accel.sh@21 -- # val= 00:08:16.721 23:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.721 23:21:37 -- accel/accel.sh@21 -- # val= 00:08:16.721 23:21:37 -- accel/accel.sh@22 -- # case "$var" in 
00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.721 23:21:37 -- accel/accel.sh@21 -- # val= 00:08:16.721 23:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.721 23:21:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.722 23:21:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.722 23:21:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:16.722 23:21:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:16.722 23:21:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.722 00:08:16.722 real 0m2.885s 00:08:16.722 user 0m2.610s 00:08:16.722 sys 0m0.338s 00:08:16.722 23:21:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.722 23:21:37 -- common/autotest_common.sh@10 -- # set +x 00:08:16.722 ************************************ 00:08:16.722 END TEST accel_crc32c 00:08:16.722 ************************************ 00:08:16.722 23:21:37 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:16.722 23:21:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:16.722 23:21:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.722 23:21:37 -- common/autotest_common.sh@10 -- # set +x 00:08:16.722 ************************************ 00:08:16.722 START TEST accel_crc32c_C2 00:08:16.722 ************************************ 00:08:16.722 23:21:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:16.722 23:21:37 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.722 23:21:37 -- accel/accel.sh@17 -- # local accel_module 00:08:16.722 23:21:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:16.722 23:21:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:16.722 23:21:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.722 23:21:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.722 23:21:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.722 23:21:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.722 23:21:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.722 23:21:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.722 23:21:37 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.722 23:21:37 -- accel/accel.sh@42 -- # jq -r . 00:08:16.722 [2024-07-11 23:21:37.486982] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:16.722 [2024-07-11 23:21:37.487162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141722 ] 00:08:16.722 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.722 [2024-07-11 23:21:37.582702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.980 [2024-07-11 23:21:37.676792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.354 23:21:38 -- accel/accel.sh@18 -- # out=' 00:08:18.355 SPDK Configuration: 00:08:18.355 Core mask: 0x1 00:08:18.355 00:08:18.355 Accel Perf Configuration: 00:08:18.355 Workload Type: crc32c 00:08:18.355 CRC-32C seed: 0 00:08:18.355 Transfer size: 4096 bytes 00:08:18.355 Vector count 2 00:08:18.355 Module: software 00:08:18.355 Queue depth: 32 00:08:18.355 Allocate depth: 32 00:08:18.355 # threads/core: 1 00:08:18.355 Run time: 1 seconds 00:08:18.355 Verify: Yes 00:08:18.355 00:08:18.355 Running for 1 seconds... 00:08:18.355 00:08:18.355 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:18.355 ------------------------------------------------------------------------------------ 00:08:18.355 0,0 314304/s 2455 MiB/s 0 0 00:08:18.355 ==================================================================================== 00:08:18.355 Total 314304/s 1227 MiB/s 0 0' 00:08:18.355 23:21:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:18.355 23:21:38 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:18.355 23:21:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:18.355 23:21:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.355 23:21:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.355 23:21:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.355 23:21:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.355 23:21:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.355 23:21:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.355 23:21:38 -- accel/accel.sh@42 -- # jq -r . 00:08:18.355 [2024-07-11 23:21:38.936776] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:18.355 [2024-07-11 23:21:38.936944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141984 ] 00:08:18.355 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.355 [2024-07-11 23:21:39.030046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.355 [2024-07-11 23:21:39.123324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val= 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val= 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val=0x1 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val= 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val= 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val=crc32c 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val=0 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val= 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val=software 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@23 -- # accel_module=software 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val=32 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val=32 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- 
accel/accel.sh@21 -- # val=1 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val=Yes 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val= 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.355 23:21:39 -- accel/accel.sh@21 -- # val= 00:08:18.355 23:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.355 23:21:39 -- accel/accel.sh@20 -- # read -r var val 00:08:19.730 23:21:40 -- accel/accel.sh@21 -- # val= 00:08:19.730 23:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # IFS=: 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # read -r var val 00:08:19.730 23:21:40 -- accel/accel.sh@21 -- # val= 00:08:19.730 23:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # IFS=: 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # read -r var val 00:08:19.730 23:21:40 -- accel/accel.sh@21 -- # val= 00:08:19.730 23:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # IFS=: 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # read -r var val 00:08:19.730 23:21:40 -- accel/accel.sh@21 -- # val= 00:08:19.730 23:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # IFS=: 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # read -r var val 00:08:19.730 23:21:40 -- accel/accel.sh@21 -- # val= 00:08:19.730 23:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # IFS=: 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # read -r var val 00:08:19.730 23:21:40 -- accel/accel.sh@21 -- # val= 00:08:19.730 23:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # IFS=: 00:08:19.730 23:21:40 -- accel/accel.sh@20 -- # read -r var val 00:08:19.730 23:21:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:19.730 23:21:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:19.730 23:21:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.730 00:08:19.730 real 0m2.912s 00:08:19.730 user 0m2.570s 00:08:19.730 sys 0m0.333s 00:08:19.730 23:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.730 23:21:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.730 ************************************ 00:08:19.730 END TEST accel_crc32c_C2 00:08:19.730 ************************************ 00:08:19.730 23:21:40 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:19.730 23:21:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:19.730 23:21:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.730 23:21:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.730 ************************************ 00:08:19.730 START TEST accel_copy 
00:08:19.730 ************************************ 00:08:19.730 23:21:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:08:19.730 23:21:40 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.730 23:21:40 -- accel/accel.sh@17 -- # local accel_module 00:08:19.730 23:21:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:08:19.730 23:21:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:19.730 23:21:40 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.730 23:21:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:19.730 23:21:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.730 23:21:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.730 23:21:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:19.730 23:21:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:19.730 23:21:40 -- accel/accel.sh@41 -- # local IFS=, 00:08:19.730 23:21:40 -- accel/accel.sh@42 -- # jq -r . 00:08:19.730 [2024-07-11 23:21:40.420133] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:19.730 [2024-07-11 23:21:40.420249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142146 ] 00:08:19.730 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.730 [2024-07-11 23:21:40.503628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.730 [2024-07-11 23:21:40.594458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.130 23:21:41 -- accel/accel.sh@18 -- # out=' 00:08:21.130 SPDK Configuration: 00:08:21.130 Core mask: 0x1 00:08:21.131 00:08:21.131 Accel Perf Configuration: 00:08:21.131 Workload Type: copy 00:08:21.131 Transfer size: 4096 bytes 00:08:21.131 Vector count 1 00:08:21.131 Module: software 00:08:21.131 Queue depth: 32 00:08:21.131 Allocate depth: 32 00:08:21.131 # threads/core: 1 00:08:21.131 Run time: 1 seconds 00:08:21.131 Verify: Yes 00:08:21.131 00:08:21.131 Running for 1 seconds... 00:08:21.131 00:08:21.131 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:21.131 ------------------------------------------------------------------------------------ 00:08:21.131 0,0 278016/s 1086 MiB/s 0 0 00:08:21.131 ==================================================================================== 00:08:21.131 Total 278016/s 1086 MiB/s 0 0' 00:08:21.131 23:21:41 -- accel/accel.sh@20 -- # IFS=: 00:08:21.131 23:21:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:21.131 23:21:41 -- accel/accel.sh@20 -- # read -r var val 00:08:21.131 23:21:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:21.131 23:21:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.131 23:21:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:21.131 23:21:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.131 23:21:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.131 23:21:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:21.131 23:21:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:21.131 23:21:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:21.131 23:21:41 -- accel/accel.sh@42 -- # jq -r . 00:08:21.131 [2024-07-11 23:21:41.867773] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:21.131 [2024-07-11 23:21:41.867941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142288 ] 00:08:21.131 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.131 [2024-07-11 23:21:41.963146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.131 [2024-07-11 23:21:42.056446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.389 23:21:42 -- accel/accel.sh@21 -- # val= 00:08:21.389 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.389 23:21:42 -- accel/accel.sh@21 -- # val= 00:08:21.389 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.389 23:21:42 -- accel/accel.sh@21 -- # val=0x1 00:08:21.389 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.389 23:21:42 -- accel/accel.sh@21 -- # val= 00:08:21.389 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.389 23:21:42 -- accel/accel.sh@21 -- # val= 00:08:21.389 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.389 23:21:42 -- accel/accel.sh@21 -- # val=copy 00:08:21.389 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.389 23:21:42 -- accel/accel.sh@24 -- # accel_opc=copy 00:08:21.389 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val= 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val=software 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@23 -- # accel_module=software 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val=32 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val=32 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val=1 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val=Yes 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val= 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:21.390 23:21:42 -- accel/accel.sh@21 -- # val= 00:08:21.390 23:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # IFS=: 00:08:21.390 23:21:42 -- accel/accel.sh@20 -- # read -r var val 00:08:22.763 23:21:43 -- accel/accel.sh@21 -- # val= 00:08:22.763 23:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # IFS=: 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # read -r var val 00:08:22.763 23:21:43 -- accel/accel.sh@21 -- # val= 00:08:22.763 23:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # IFS=: 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # read -r var val 00:08:22.763 23:21:43 -- accel/accel.sh@21 -- # val= 00:08:22.763 23:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # IFS=: 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # read -r var val 00:08:22.763 23:21:43 -- accel/accel.sh@21 -- # val= 00:08:22.763 23:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # IFS=: 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # read -r var val 00:08:22.763 23:21:43 -- accel/accel.sh@21 -- # val= 00:08:22.763 23:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # IFS=: 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # read -r var val 00:08:22.763 23:21:43 -- accel/accel.sh@21 -- # val= 00:08:22.763 23:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # IFS=: 00:08:22.763 23:21:43 -- accel/accel.sh@20 -- # read -r var val 00:08:22.763 23:21:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:22.763 23:21:43 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:08:22.763 23:21:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.763 00:08:22.763 real 0m2.902s 00:08:22.763 user 0m2.559s 00:08:22.763 sys 0m0.334s 00:08:22.763 23:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.763 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:08:22.763 ************************************ 00:08:22.763 END TEST accel_copy 00:08:22.763 ************************************ 00:08:22.763 23:21:43 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.763 23:21:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:22.763 23:21:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.763 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:08:22.763 ************************************ 00:08:22.763 START TEST accel_fill 00:08:22.763 ************************************ 00:08:22.763 23:21:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.763 23:21:43 -- accel/accel.sh@16 -- # local accel_opc 
00:08:22.763 23:21:43 -- accel/accel.sh@17 -- # local accel_module 00:08:22.763 23:21:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.763 23:21:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.763 23:21:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.763 23:21:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.763 23:21:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.763 23:21:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.763 23:21:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.763 23:21:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.763 23:21:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.763 23:21:43 -- accel/accel.sh@42 -- # jq -r . 00:08:22.763 [2024-07-11 23:21:43.352792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:22.763 [2024-07-11 23:21:43.352880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142482 ] 00:08:22.763 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.763 [2024-07-11 23:21:43.419881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.763 [2024-07-11 23:21:43.512711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.139 23:21:44 -- accel/accel.sh@18 -- # out=' 00:08:24.139 SPDK Configuration: 00:08:24.139 Core mask: 0x1 00:08:24.139 00:08:24.139 Accel Perf Configuration: 00:08:24.139 Workload Type: fill 00:08:24.139 Fill pattern: 0x80 00:08:24.139 Transfer size: 4096 bytes 00:08:24.139 Vector count 1 00:08:24.139 Module: software 00:08:24.139 Queue depth: 64 00:08:24.139 Allocate depth: 64 00:08:24.139 # threads/core: 1 00:08:24.139 Run time: 1 seconds 00:08:24.139 Verify: Yes 00:08:24.139 00:08:24.139 Running for 1 seconds... 00:08:24.139 00:08:24.139 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:24.139 ------------------------------------------------------------------------------------ 00:08:24.139 0,0 405056/s 1582 MiB/s 0 0 00:08:24.139 ==================================================================================== 00:08:24.139 Total 405056/s 1582 MiB/s 0 0' 00:08:24.139 23:21:44 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:24.139 23:21:44 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:24.139 23:21:44 -- accel/accel.sh@12 -- # build_accel_config 00:08:24.139 23:21:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:24.139 23:21:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.139 23:21:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.139 23:21:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:24.139 23:21:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:24.139 23:21:44 -- accel/accel.sh@41 -- # local IFS=, 00:08:24.139 23:21:44 -- accel/accel.sh@42 -- # jq -r . 00:08:24.139 [2024-07-11 23:21:44.783367] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:24.139 [2024-07-11 23:21:44.783500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142706 ] 00:08:24.139 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.139 [2024-07-11 23:21:44.878369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.139 [2024-07-11 23:21:44.973131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val= 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val= 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val=0x1 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val= 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val= 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val=fill 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@24 -- # accel_opc=fill 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val=0x80 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val= 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val=software 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@23 -- # accel_module=software 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val=64 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val=64 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- 
accel/accel.sh@21 -- # val=1 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val=Yes 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val= 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:24.139 23:21:45 -- accel/accel.sh@21 -- # val= 00:08:24.139 23:21:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # IFS=: 00:08:24.139 23:21:45 -- accel/accel.sh@20 -- # read -r var val 00:08:25.511 23:21:46 -- accel/accel.sh@21 -- # val= 00:08:25.511 23:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.511 23:21:46 -- accel/accel.sh@21 -- # val= 00:08:25.511 23:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.511 23:21:46 -- accel/accel.sh@21 -- # val= 00:08:25.511 23:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.511 23:21:46 -- accel/accel.sh@21 -- # val= 00:08:25.511 23:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.511 23:21:46 -- accel/accel.sh@21 -- # val= 00:08:25.511 23:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.511 23:21:46 -- accel/accel.sh@21 -- # val= 00:08:25.511 23:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.511 23:21:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.511 23:21:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:25.511 23:21:46 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:08:25.511 23:21:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.511 00:08:25.511 real 0m2.880s 00:08:25.511 user 0m2.543s 00:08:25.511 sys 0m0.328s 00:08:25.511 23:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.511 23:21:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.511 ************************************ 00:08:25.511 END TEST accel_fill 00:08:25.511 ************************************ 00:08:25.511 23:21:46 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:25.511 23:21:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:25.511 23:21:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.511 23:21:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.511 ************************************ 00:08:25.511 START TEST 
accel_copy_crc32c 00:08:25.511 ************************************ 00:08:25.511 23:21:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:08:25.511 23:21:46 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.511 23:21:46 -- accel/accel.sh@17 -- # local accel_module 00:08:25.511 23:21:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:25.511 23:21:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:25.511 23:21:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.511 23:21:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.511 23:21:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.511 23:21:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.511 23:21:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.511 23:21:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.511 23:21:46 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.511 23:21:46 -- accel/accel.sh@42 -- # jq -r . 00:08:25.511 [2024-07-11 23:21:46.266103] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:25.511 [2024-07-11 23:21:46.266225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142874 ] 00:08:25.511 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.511 [2024-07-11 23:21:46.344063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.511 [2024-07-11 23:21:46.437003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.881 23:21:47 -- accel/accel.sh@18 -- # out=' 00:08:26.881 SPDK Configuration: 00:08:26.881 Core mask: 0x1 00:08:26.881 00:08:26.881 Accel Perf Configuration: 00:08:26.881 Workload Type: copy_crc32c 00:08:26.881 CRC-32C seed: 0 00:08:26.881 Vector size: 4096 bytes 00:08:26.881 Transfer size: 4096 bytes 00:08:26.881 Vector count 1 00:08:26.881 Module: software 00:08:26.881 Queue depth: 32 00:08:26.881 Allocate depth: 32 00:08:26.881 # threads/core: 1 00:08:26.881 Run time: 1 seconds 00:08:26.881 Verify: Yes 00:08:26.881 00:08:26.881 Running for 1 seconds... 00:08:26.881 00:08:26.881 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:26.881 ------------------------------------------------------------------------------------ 00:08:26.881 0,0 218272/s 852 MiB/s 0 0 00:08:26.881 ==================================================================================== 00:08:26.881 Total 218272/s 852 MiB/s 0 0' 00:08:26.881 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:26.881 23:21:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:26.881 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:26.881 23:21:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:26.881 23:21:47 -- accel/accel.sh@12 -- # build_accel_config 00:08:26.881 23:21:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:26.881 23:21:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.881 23:21:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.881 23:21:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:26.881 23:21:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:26.881 23:21:47 -- accel/accel.sh@41 -- # local IFS=, 00:08:26.881 23:21:47 -- accel/accel.sh@42 -- # jq -r . 
00:08:26.881 [2024-07-11 23:21:47.709472] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:26.881 [2024-07-11 23:21:47.709632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143015 ] 00:08:26.881 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.881 [2024-07-11 23:21:47.804706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.139 [2024-07-11 23:21:47.898504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val= 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val= 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=0x1 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val= 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val= 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=0 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val= 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=software 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@23 -- # accel_module=software 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=32 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 
00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=32 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=1 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val=Yes 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val= 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:27.139 23:21:47 -- accel/accel.sh@21 -- # val= 00:08:27.139 23:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # IFS=: 00:08:27.139 23:21:47 -- accel/accel.sh@20 -- # read -r var val 00:08:28.512 23:21:49 -- accel/accel.sh@21 -- # val= 00:08:28.512 23:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.512 23:21:49 -- accel/accel.sh@21 -- # val= 00:08:28.512 23:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.512 23:21:49 -- accel/accel.sh@21 -- # val= 00:08:28.512 23:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.512 23:21:49 -- accel/accel.sh@21 -- # val= 00:08:28.512 23:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.512 23:21:49 -- accel/accel.sh@21 -- # val= 00:08:28.512 23:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.512 23:21:49 -- accel/accel.sh@21 -- # val= 00:08:28.512 23:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.512 23:21:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.512 23:21:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:28.512 23:21:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:28.512 23:21:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.512 00:08:28.512 real 0m2.895s 00:08:28.512 user 0m2.564s 00:08:28.512 sys 0m0.321s 00:08:28.512 23:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.512 23:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:28.512 ************************************ 00:08:28.512 END TEST accel_copy_crc32c 00:08:28.512 ************************************ 00:08:28.512 
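A quick consistency check on the result tables in this section: the Bandwidth column tracks transfers per second times the bytes moved per transfer. A sketch using the accel_copy_crc32c numbers just above (values copied from its table; 4096-byte transfers, vector count 1):

    # 218272 transfers/s * 4096 B = 894,042,112 B/s, i.e. about 852 MiB/s,
    # matching the reported row "0,0 218272/s 852 MiB/s 0 0".
    echo $(( 218272 * 4096 / 1024 / 1024 ))   # prints 852

For the two-vector runs (the crc32c -C 2 test earlier and the copy_crc32c -C 2 test that starts below), the per-core rows scale by the vector count (314304/s x 8192 B gives the reported 2455 MiB/s), while their Total rows print the unscaled 4096-byte figure, which is why Total bandwidth there reads as half the per-core row.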
23:21:49 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:28.512 23:21:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:28.512 23:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.512 23:21:49 -- common/autotest_common.sh@10 -- # set +x 00:08:28.512 ************************************ 00:08:28.512 START TEST accel_copy_crc32c_C2 00:08:28.512 ************************************ 00:08:28.512 23:21:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:28.512 23:21:49 -- accel/accel.sh@16 -- # local accel_opc 00:08:28.512 23:21:49 -- accel/accel.sh@17 -- # local accel_module 00:08:28.512 23:21:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:28.512 23:21:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:28.512 23:21:49 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.512 23:21:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:28.512 23:21:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.512 23:21:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.512 23:21:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:28.512 23:21:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:28.512 23:21:49 -- accel/accel.sh@41 -- # local IFS=, 00:08:28.512 23:21:49 -- accel/accel.sh@42 -- # jq -r . 00:08:28.512 [2024-07-11 23:21:49.201296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:28.512 [2024-07-11 23:21:49.201380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143256 ] 00:08:28.512 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.512 [2024-07-11 23:21:49.294181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.512 [2024-07-11 23:21:49.387771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.886 23:21:50 -- accel/accel.sh@18 -- # out=' 00:08:29.886 SPDK Configuration: 00:08:29.886 Core mask: 0x1 00:08:29.886 00:08:29.886 Accel Perf Configuration: 00:08:29.886 Workload Type: copy_crc32c 00:08:29.886 CRC-32C seed: 0 00:08:29.886 Vector size: 4096 bytes 00:08:29.886 Transfer size: 8192 bytes 00:08:29.886 Vector count 2 00:08:29.886 Module: software 00:08:29.886 Queue depth: 32 00:08:29.886 Allocate depth: 32 00:08:29.886 # threads/core: 1 00:08:29.886 Run time: 1 seconds 00:08:29.886 Verify: Yes 00:08:29.886 00:08:29.886 Running for 1 seconds... 
00:08:29.886 00:08:29.886 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:29.886 ------------------------------------------------------------------------------------ 00:08:29.886 0,0 153024/s 1195 MiB/s 0 0 00:08:29.886 ==================================================================================== 00:08:29.886 Total 153024/s 597 MiB/s 0 0' 00:08:29.886 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:29.886 23:21:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:29.886 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:29.886 23:21:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:29.886 23:21:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:29.886 23:21:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:29.886 23:21:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.886 23:21:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.886 23:21:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:29.886 23:21:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:29.887 23:21:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:29.887 23:21:50 -- accel/accel.sh@42 -- # jq -r . 00:08:29.887 [2024-07-11 23:21:50.660856] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:29.887 [2024-07-11 23:21:50.661031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143433 ] 00:08:29.887 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.887 [2024-07-11 23:21:50.756117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.144 [2024-07-11 23:21:50.850231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val= 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val= 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val=0x1 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val= 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val= 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val=0 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 
00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val= 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val=software 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@23 -- # accel_module=software 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val=32 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.144 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.144 23:21:50 -- accel/accel.sh@21 -- # val=32 00:08:30.144 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.145 23:21:50 -- accel/accel.sh@21 -- # val=1 00:08:30.145 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.145 23:21:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:30.145 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.145 23:21:50 -- accel/accel.sh@21 -- # val=Yes 00:08:30.145 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.145 23:21:50 -- accel/accel.sh@21 -- # val= 00:08:30.145 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:30.145 23:21:50 -- accel/accel.sh@21 -- # val= 00:08:30.145 23:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # IFS=: 00:08:30.145 23:21:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.516 23:21:52 -- accel/accel.sh@21 -- # val= 00:08:31.516 23:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # IFS=: 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # read -r var val 00:08:31.516 23:21:52 -- accel/accel.sh@21 -- # val= 00:08:31.516 23:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # IFS=: 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # read -r var val 00:08:31.516 23:21:52 -- accel/accel.sh@21 -- # val= 00:08:31.516 23:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # IFS=: 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # read -r var val 00:08:31.516 23:21:52 -- accel/accel.sh@21 -- # val= 00:08:31.516 23:21:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # IFS=: 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # read -r var val 00:08:31.516 23:21:52 -- accel/accel.sh@21 -- # val= 00:08:31.516 23:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # IFS=: 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # read -r var val 00:08:31.516 23:21:52 -- accel/accel.sh@21 -- # val= 00:08:31.516 23:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # IFS=: 00:08:31.516 23:21:52 -- accel/accel.sh@20 -- # read -r var val 00:08:31.516 23:21:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:31.516 23:21:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:31.516 23:21:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.516 00:08:31.516 real 0m2.919s 00:08:31.516 user 0m2.559s 00:08:31.516 sys 0m0.349s 00:08:31.516 23:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.516 23:21:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.516 ************************************ 00:08:31.516 END TEST accel_copy_crc32c_C2 00:08:31.516 ************************************ 00:08:31.516 23:21:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:31.516 23:21:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:31.516 23:21:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.516 23:21:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.516 ************************************ 00:08:31.516 START TEST accel_dualcast 00:08:31.516 ************************************ 00:08:31.516 23:21:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:31.516 23:21:52 -- accel/accel.sh@16 -- # local accel_opc 00:08:31.516 23:21:52 -- accel/accel.sh@17 -- # local accel_module 00:08:31.516 23:21:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:31.516 23:21:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:31.516 23:21:52 -- accel/accel.sh@12 -- # build_accel_config 00:08:31.516 23:21:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:31.516 23:21:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.516 23:21:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.516 23:21:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:31.516 23:21:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:31.516 23:21:52 -- accel/accel.sh@41 -- # local IFS=, 00:08:31.516 23:21:52 -- accel/accel.sh@42 -- # jq -r . 00:08:31.516 [2024-07-11 23:21:52.151850] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:31.516 [2024-07-11 23:21:52.152019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143594 ] 00:08:31.516 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.516 [2024-07-11 23:21:52.245900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.516 [2024-07-11 23:21:52.341538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.888 23:21:53 -- accel/accel.sh@18 -- # out=' 00:08:32.888 SPDK Configuration: 00:08:32.888 Core mask: 0x1 00:08:32.888 00:08:32.888 Accel Perf Configuration: 00:08:32.888 Workload Type: dualcast 00:08:32.888 Transfer size: 4096 bytes 00:08:32.888 Vector count 1 00:08:32.888 Module: software 00:08:32.888 Queue depth: 32 00:08:32.888 Allocate depth: 32 00:08:32.888 # threads/core: 1 00:08:32.888 Run time: 1 seconds 00:08:32.888 Verify: Yes 00:08:32.888 00:08:32.888 Running for 1 seconds... 00:08:32.888 00:08:32.888 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:32.888 ------------------------------------------------------------------------------------ 00:08:32.888 0,0 297568/s 1162 MiB/s 0 0 00:08:32.888 ==================================================================================== 00:08:32.888 Total 297568/s 1162 MiB/s 0 0' 00:08:32.888 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:32.888 23:21:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:32.888 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:32.888 23:21:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:32.888 23:21:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:32.888 23:21:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:32.888 23:21:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.888 23:21:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.888 23:21:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:32.888 23:21:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:32.888 23:21:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:32.888 23:21:53 -- accel/accel.sh@42 -- # jq -r . 00:08:32.888 [2024-07-11 23:21:53.598937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:32.888 [2024-07-11 23:21:53.599107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143742 ] 00:08:32.888 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.888 [2024-07-11 23:21:53.690098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.888 [2024-07-11 23:21:53.785223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val= 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val= 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val=0x1 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val= 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val= 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val=dualcast 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val= 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val=software 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@23 -- # accel_module=software 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val=32 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val=32 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val=1 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val=Yes 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val= 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:33.146 23:21:53 -- accel/accel.sh@21 -- # val= 00:08:33.146 23:21:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # IFS=: 00:08:33.146 23:21:53 -- accel/accel.sh@20 -- # read -r var val 00:08:34.079 23:21:55 -- accel/accel.sh@21 -- # val= 00:08:34.079 23:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # IFS=: 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # read -r var val 00:08:34.079 23:21:55 -- accel/accel.sh@21 -- # val= 00:08:34.079 23:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # IFS=: 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # read -r var val 00:08:34.079 23:21:55 -- accel/accel.sh@21 -- # val= 00:08:34.079 23:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # IFS=: 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # read -r var val 00:08:34.079 23:21:55 -- accel/accel.sh@21 -- # val= 00:08:34.079 23:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # IFS=: 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # read -r var val 00:08:34.079 23:21:55 -- accel/accel.sh@21 -- # val= 00:08:34.079 23:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # IFS=: 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # read -r var val 00:08:34.079 23:21:55 -- accel/accel.sh@21 -- # val= 00:08:34.079 23:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # IFS=: 00:08:34.079 23:21:55 -- accel/accel.sh@20 -- # read -r var val 00:08:34.079 23:21:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:34.079 23:21:55 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:34.079 23:21:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.079 00:08:34.079 real 0m2.901s 00:08:34.079 user 0m2.548s 00:08:34.079 sys 0m0.343s 00:08:34.079 23:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.079 23:21:55 -- common/autotest_common.sh@10 -- # set +x 00:08:34.079 ************************************ 00:08:34.079 END TEST accel_dualcast 00:08:34.079 ************************************ 00:08:34.338 23:21:55 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:34.338 23:21:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:34.338 23:21:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.338 23:21:55 -- common/autotest_common.sh@10 -- # set +x 00:08:34.338 ************************************ 00:08:34.338 START TEST accel_compare 00:08:34.338 ************************************ 00:08:34.338 23:21:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:34.338 23:21:55 -- accel/accel.sh@16 -- # local accel_opc 00:08:34.338 23:21:55 -- 
accel/accel.sh@17 -- # local accel_module 00:08:34.338 23:21:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:34.338 23:21:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:34.338 23:21:55 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.338 23:21:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.338 23:21:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.338 23:21:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.338 23:21:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.338 23:21:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.338 23:21:55 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.338 23:21:55 -- accel/accel.sh@42 -- # jq -r . 00:08:34.338 [2024-07-11 23:21:55.077774] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:34.338 [2024-07-11 23:21:55.077861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144014 ] 00:08:34.338 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.338 [2024-07-11 23:21:55.156051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.338 [2024-07-11 23:21:55.245462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.710 23:21:56 -- accel/accel.sh@18 -- # out=' 00:08:35.710 SPDK Configuration: 00:08:35.710 Core mask: 0x1 00:08:35.710 00:08:35.710 Accel Perf Configuration: 00:08:35.710 Workload Type: compare 00:08:35.710 Transfer size: 4096 bytes 00:08:35.710 Vector count 1 00:08:35.710 Module: software 00:08:35.710 Queue depth: 32 00:08:35.710 Allocate depth: 32 00:08:35.710 # threads/core: 1 00:08:35.710 Run time: 1 seconds 00:08:35.710 Verify: Yes 00:08:35.710 00:08:35.710 Running for 1 seconds... 00:08:35.710 00:08:35.710 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:35.710 ------------------------------------------------------------------------------------ 00:08:35.710 0,0 399808/s 1561 MiB/s 0 0 00:08:35.710 ==================================================================================== 00:08:35.710 Total 399808/s 1561 MiB/s 0 0' 00:08:35.710 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.710 23:21:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:35.710 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.710 23:21:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:35.710 23:21:56 -- accel/accel.sh@12 -- # build_accel_config 00:08:35.710 23:21:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:35.710 23:21:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.710 23:21:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.710 23:21:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:35.710 23:21:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:35.710 23:21:56 -- accel/accel.sh@41 -- # local IFS=, 00:08:35.710 23:21:56 -- accel/accel.sh@42 -- # jq -r . 00:08:35.710 [2024-07-11 23:21:56.519041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:35.710 [2024-07-11 23:21:56.519219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144160 ] 00:08:35.710 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.710 [2024-07-11 23:21:56.612449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.969 [2024-07-11 23:21:56.706507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val= 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val= 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val=0x1 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val= 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val= 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val=compare 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val= 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val=software 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@23 -- # accel_module=software 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val=32 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val=32 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val=1 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val=Yes 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val= 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:35.969 23:21:56 -- accel/accel.sh@21 -- # val= 00:08:35.969 23:21:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # IFS=: 00:08:35.969 23:21:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.339 23:21:57 -- accel/accel.sh@21 -- # val= 00:08:37.339 23:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # IFS=: 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # read -r var val 00:08:37.339 23:21:57 -- accel/accel.sh@21 -- # val= 00:08:37.339 23:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # IFS=: 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # read -r var val 00:08:37.339 23:21:57 -- accel/accel.sh@21 -- # val= 00:08:37.339 23:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # IFS=: 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # read -r var val 00:08:37.339 23:21:57 -- accel/accel.sh@21 -- # val= 00:08:37.339 23:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # IFS=: 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # read -r var val 00:08:37.339 23:21:57 -- accel/accel.sh@21 -- # val= 00:08:37.339 23:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # IFS=: 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # read -r var val 00:08:37.339 23:21:57 -- accel/accel.sh@21 -- # val= 00:08:37.339 23:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # IFS=: 00:08:37.339 23:21:57 -- accel/accel.sh@20 -- # read -r var val 00:08:37.339 23:21:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:37.339 23:21:57 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:37.339 23:21:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.339 00:08:37.339 real 0m2.892s 00:08:37.339 user 0m2.556s 00:08:37.339 sys 0m0.326s 00:08:37.339 23:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.339 23:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:37.339 ************************************ 00:08:37.339 END TEST accel_compare 00:08:37.339 ************************************ 00:08:37.339 23:21:57 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:37.339 23:21:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:37.339 23:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.339 23:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:37.339 ************************************ 00:08:37.339 START TEST accel_xor 00:08:37.339 ************************************ 00:08:37.339 23:21:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:37.339 23:21:57 -- accel/accel.sh@16 -- # local accel_opc 00:08:37.339 23:21:57 -- accel/accel.sh@17 
-- # local accel_module 00:08:37.339 23:21:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:37.339 23:21:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:37.339 23:21:57 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.339 23:21:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:37.339 23:21:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.339 23:21:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.339 23:21:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:37.339 23:21:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:37.339 23:21:57 -- accel/accel.sh@41 -- # local IFS=, 00:08:37.339 23:21:57 -- accel/accel.sh@42 -- # jq -r . 00:08:37.339 [2024-07-11 23:21:57.999045] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:37.340 [2024-07-11 23:21:57.999129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144315 ] 00:08:37.340 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.340 [2024-07-11 23:21:58.065751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.340 [2024-07-11 23:21:58.160516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.710 23:21:59 -- accel/accel.sh@18 -- # out=' 00:08:38.710 SPDK Configuration: 00:08:38.710 Core mask: 0x1 00:08:38.710 00:08:38.710 Accel Perf Configuration: 00:08:38.710 Workload Type: xor 00:08:38.710 Source buffers: 2 00:08:38.710 Transfer size: 4096 bytes 00:08:38.710 Vector count 1 00:08:38.710 Module: software 00:08:38.710 Queue depth: 32 00:08:38.710 Allocate depth: 32 00:08:38.710 # threads/core: 1 00:08:38.710 Run time: 1 seconds 00:08:38.710 Verify: Yes 00:08:38.710 00:08:38.710 Running for 1 seconds... 00:08:38.710 00:08:38.710 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:38.710 ------------------------------------------------------------------------------------ 00:08:38.710 0,0 192672/s 752 MiB/s 0 0 00:08:38.710 ==================================================================================== 00:08:38.710 Total 192672/s 752 MiB/s 0 0' 00:08:38.710 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.710 23:21:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:38.710 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.710 23:21:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:38.710 23:21:59 -- accel/accel.sh@12 -- # build_accel_config 00:08:38.710 23:21:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:38.710 23:21:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.710 23:21:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.710 23:21:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:38.710 23:21:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:38.710 23:21:59 -- accel/accel.sh@41 -- # local IFS=, 00:08:38.710 23:21:59 -- accel/accel.sh@42 -- # jq -r . 00:08:38.710 [2024-07-11 23:21:59.426816] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:38.710 [2024-07-11 23:21:59.426915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144476 ] 00:08:38.710 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.710 [2024-07-11 23:21:59.515346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.710 [2024-07-11 23:21:59.608669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val= 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val= 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val=0x1 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val= 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val= 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val=xor 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val=2 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val= 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val=software 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@23 -- # accel_module=software 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val=32 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val=32 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- 
accel/accel.sh@21 -- # val=1 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val=Yes 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val= 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:38.967 23:21:59 -- accel/accel.sh@21 -- # val= 00:08:38.967 23:21:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # IFS=: 00:08:38.967 23:21:59 -- accel/accel.sh@20 -- # read -r var val 00:08:39.899 23:22:00 -- accel/accel.sh@21 -- # val= 00:08:40.224 23:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # IFS=: 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # read -r var val 00:08:40.224 23:22:00 -- accel/accel.sh@21 -- # val= 00:08:40.224 23:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # IFS=: 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # read -r var val 00:08:40.224 23:22:00 -- accel/accel.sh@21 -- # val= 00:08:40.224 23:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # IFS=: 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # read -r var val 00:08:40.224 23:22:00 -- accel/accel.sh@21 -- # val= 00:08:40.224 23:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # IFS=: 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # read -r var val 00:08:40.224 23:22:00 -- accel/accel.sh@21 -- # val= 00:08:40.224 23:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # IFS=: 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # read -r var val 00:08:40.224 23:22:00 -- accel/accel.sh@21 -- # val= 00:08:40.224 23:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # IFS=: 00:08:40.224 23:22:00 -- accel/accel.sh@20 -- # read -r var val 00:08:40.224 23:22:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:40.224 23:22:00 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:40.224 23:22:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.224 00:08:40.224 real 0m2.873s 00:08:40.224 user 0m2.535s 00:08:40.224 sys 0m0.328s 00:08:40.224 23:22:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.224 23:22:00 -- common/autotest_common.sh@10 -- # set +x 00:08:40.224 ************************************ 00:08:40.224 END TEST accel_xor 00:08:40.224 ************************************ 00:08:40.224 23:22:00 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:40.224 23:22:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:40.224 23:22:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.224 23:22:00 -- common/autotest_common.sh@10 -- # set +x 00:08:40.224 ************************************ 00:08:40.224 START TEST accel_xor 
00:08:40.224 ************************************ 00:08:40.224 23:22:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:40.224 23:22:00 -- accel/accel.sh@16 -- # local accel_opc 00:08:40.224 23:22:00 -- accel/accel.sh@17 -- # local accel_module 00:08:40.224 23:22:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:40.224 23:22:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:40.225 23:22:00 -- accel/accel.sh@12 -- # build_accel_config 00:08:40.225 23:22:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:40.225 23:22:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.225 23:22:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.225 23:22:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:40.225 23:22:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:40.225 23:22:00 -- accel/accel.sh@41 -- # local IFS=, 00:08:40.225 23:22:00 -- accel/accel.sh@42 -- # jq -r . 00:08:40.225 [2024-07-11 23:22:00.916257] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:40.225 [2024-07-11 23:22:00.916338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144741 ] 00:08:40.225 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.225 [2024-07-11 23:22:01.001698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.225 [2024-07-11 23:22:01.094869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.608 23:22:02 -- accel/accel.sh@18 -- # out=' 00:08:41.608 SPDK Configuration: 00:08:41.608 Core mask: 0x1 00:08:41.608 00:08:41.608 Accel Perf Configuration: 00:08:41.608 Workload Type: xor 00:08:41.608 Source buffers: 3 00:08:41.608 Transfer size: 4096 bytes 00:08:41.608 Vector count 1 00:08:41.608 Module: software 00:08:41.608 Queue depth: 32 00:08:41.608 Allocate depth: 32 00:08:41.608 # threads/core: 1 00:08:41.608 Run time: 1 seconds 00:08:41.608 Verify: Yes 00:08:41.608 00:08:41.608 Running for 1 seconds... 00:08:41.608 00:08:41.608 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:41.608 ------------------------------------------------------------------------------------ 00:08:41.608 0,0 183424/s 716 MiB/s 0 0 00:08:41.608 ==================================================================================== 00:08:41.608 Total 183424/s 716 MiB/s 0 0' 00:08:41.608 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.608 23:22:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:41.608 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.608 23:22:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:41.608 23:22:02 -- accel/accel.sh@12 -- # build_accel_config 00:08:41.608 23:22:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:41.608 23:22:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.608 23:22:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.608 23:22:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:41.608 23:22:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:41.608 23:22:02 -- accel/accel.sh@41 -- # local IFS=, 00:08:41.608 23:22:02 -- accel/accel.sh@42 -- # jq -r . 00:08:41.608 [2024-07-11 23:22:02.368014] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:41.608 [2024-07-11 23:22:02.368216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144882 ] 00:08:41.608 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.608 [2024-07-11 23:22:02.463997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.608 [2024-07-11 23:22:02.557598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val= 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val= 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val=0x1 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val= 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val= 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val=xor 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val=3 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val= 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val=software 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@23 -- # accel_module=software 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val=32 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val=32 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- 
accel/accel.sh@21 -- # val=1 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val=Yes 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val= 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:41.865 23:22:02 -- accel/accel.sh@21 -- # val= 00:08:41.865 23:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # IFS=: 00:08:41.865 23:22:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.249 23:22:03 -- accel/accel.sh@21 -- # val= 00:08:43.249 23:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # IFS=: 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # read -r var val 00:08:43.249 23:22:03 -- accel/accel.sh@21 -- # val= 00:08:43.249 23:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # IFS=: 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # read -r var val 00:08:43.249 23:22:03 -- accel/accel.sh@21 -- # val= 00:08:43.249 23:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # IFS=: 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # read -r var val 00:08:43.249 23:22:03 -- accel/accel.sh@21 -- # val= 00:08:43.249 23:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # IFS=: 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # read -r var val 00:08:43.249 23:22:03 -- accel/accel.sh@21 -- # val= 00:08:43.249 23:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # IFS=: 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # read -r var val 00:08:43.249 23:22:03 -- accel/accel.sh@21 -- # val= 00:08:43.249 23:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # IFS=: 00:08:43.249 23:22:03 -- accel/accel.sh@20 -- # read -r var val 00:08:43.249 23:22:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:43.249 23:22:03 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:43.249 23:22:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.249 00:08:43.249 real 0m2.914s 00:08:43.249 user 0m2.542s 00:08:43.249 sys 0m0.361s 00:08:43.249 23:22:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.249 23:22:03 -- common/autotest_common.sh@10 -- # set +x 00:08:43.249 ************************************ 00:08:43.249 END TEST accel_xor 00:08:43.249 ************************************ 00:08:43.249 23:22:03 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:43.249 23:22:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:43.249 23:22:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.249 23:22:03 -- common/autotest_common.sh@10 -- # set +x 00:08:43.249 ************************************ 00:08:43.249 START TEST 
accel_dif_verify 00:08:43.249 ************************************ 00:08:43.249 23:22:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:08:43.249 23:22:03 -- accel/accel.sh@16 -- # local accel_opc 00:08:43.249 23:22:03 -- accel/accel.sh@17 -- # local accel_module 00:08:43.249 23:22:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:08:43.249 23:22:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:43.249 23:22:03 -- accel/accel.sh@12 -- # build_accel_config 00:08:43.249 23:22:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:43.249 23:22:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:43.249 23:22:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.249 23:22:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:43.249 23:22:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:43.249 23:22:03 -- accel/accel.sh@41 -- # local IFS=, 00:08:43.249 23:22:03 -- accel/accel.sh@42 -- # jq -r . 00:08:43.249 [2024-07-11 23:22:03.865972] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:43.249 [2024-07-11 23:22:03.866155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145042 ] 00:08:43.249 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.249 [2024-07-11 23:22:03.960519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.249 [2024-07-11 23:22:04.052733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.623 23:22:05 -- accel/accel.sh@18 -- # out=' 00:08:44.623 SPDK Configuration: 00:08:44.623 Core mask: 0x1 00:08:44.623 00:08:44.623 Accel Perf Configuration: 00:08:44.623 Workload Type: dif_verify 00:08:44.623 Vector size: 4096 bytes 00:08:44.623 Transfer size: 4096 bytes 00:08:44.623 Block size: 512 bytes 00:08:44.623 Metadata size: 8 bytes 00:08:44.623 Vector count 1 00:08:44.623 Module: software 00:08:44.623 Queue depth: 32 00:08:44.623 Allocate depth: 32 00:08:44.623 # threads/core: 1 00:08:44.623 Run time: 1 seconds 00:08:44.623 Verify: No 00:08:44.623 00:08:44.623 Running for 1 seconds... 00:08:44.623 00:08:44.623 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:44.623 ------------------------------------------------------------------------------------ 00:08:44.623 0,0 81920/s 325 MiB/s 0 0 00:08:44.623 ==================================================================================== 00:08:44.623 Total 81920/s 320 MiB/s 0 0' 00:08:44.623 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.623 23:22:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:44.623 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.623 23:22:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:44.623 23:22:05 -- accel/accel.sh@12 -- # build_accel_config 00:08:44.623 23:22:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:44.623 23:22:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.623 23:22:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.623 23:22:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:44.623 23:22:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:44.623 23:22:05 -- accel/accel.sh@41 -- # local IFS=, 00:08:44.623 23:22:05 -- accel/accel.sh@42 -- # jq -r . 
00:08:44.623 [2024-07-11 23:22:05.320114] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:44.624 [2024-07-11 23:22:05.320286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145257 ] 00:08:44.624 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.624 [2024-07-11 23:22:05.415043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.624 [2024-07-11 23:22:05.508133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.624 23:22:05 -- accel/accel.sh@21 -- # val= 00:08:44.624 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.624 23:22:05 -- accel/accel.sh@21 -- # val= 00:08:44.624 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.624 23:22:05 -- accel/accel.sh@21 -- # val=0x1 00:08:44.624 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.624 23:22:05 -- accel/accel.sh@21 -- # val= 00:08:44.624 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.624 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.624 23:22:05 -- accel/accel.sh@21 -- # val= 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val=dif_verify 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val= 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val=software 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@23 -- # 
accel_module=software 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val=32 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val=32 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val=1 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val=No 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val= 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.882 23:22:05 -- accel/accel.sh@21 -- # val= 00:08:44.882 23:22:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.882 23:22:05 -- accel/accel.sh@20 -- # read -r var val 00:08:45.816 23:22:06 -- accel/accel.sh@21 -- # val= 00:08:45.816 23:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # IFS=: 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # read -r var val 00:08:45.816 23:22:06 -- accel/accel.sh@21 -- # val= 00:08:45.816 23:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # IFS=: 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # read -r var val 00:08:45.816 23:22:06 -- accel/accel.sh@21 -- # val= 00:08:45.816 23:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # IFS=: 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # read -r var val 00:08:45.816 23:22:06 -- accel/accel.sh@21 -- # val= 00:08:45.816 23:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # IFS=: 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # read -r var val 00:08:45.816 23:22:06 -- accel/accel.sh@21 -- # val= 00:08:45.816 23:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # IFS=: 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # read -r var val 00:08:45.816 23:22:06 -- accel/accel.sh@21 -- # val= 00:08:45.816 23:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # IFS=: 00:08:45.816 23:22:06 -- accel/accel.sh@20 -- # read -r var val 00:08:45.816 23:22:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:45.816 23:22:06 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:45.816 23:22:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.816 00:08:45.816 real 0m2.917s 00:08:45.817 user 0m2.565s 00:08:45.817 sys 0m0.344s 00:08:45.817 23:22:06 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.817 23:22:06 -- common/autotest_common.sh@10 -- # set +x 00:08:45.817 ************************************ 00:08:45.817 END TEST accel_dif_verify 00:08:45.817 ************************************ 00:08:46.075 23:22:06 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:46.075 23:22:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:46.075 23:22:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.075 23:22:06 -- common/autotest_common.sh@10 -- # set +x 00:08:46.075 ************************************ 00:08:46.075 START TEST accel_dif_generate 00:08:46.075 ************************************ 00:08:46.075 23:22:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:08:46.075 23:22:06 -- accel/accel.sh@16 -- # local accel_opc 00:08:46.075 23:22:06 -- accel/accel.sh@17 -- # local accel_module 00:08:46.075 23:22:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:46.075 23:22:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:46.075 23:22:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:46.075 23:22:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:46.075 23:22:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.075 23:22:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.075 23:22:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:46.075 23:22:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:46.075 23:22:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:46.075 23:22:06 -- accel/accel.sh@42 -- # jq -r . 00:08:46.075 [2024-07-11 23:22:06.811813] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:46.075 [2024-07-11 23:22:06.811902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145466 ] 00:08:46.075 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.075 [2024-07-11 23:22:06.895596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.075 [2024-07-11 23:22:06.986110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.452 23:22:08 -- accel/accel.sh@18 -- # out=' 00:08:47.452 SPDK Configuration: 00:08:47.452 Core mask: 0x1 00:08:47.452 00:08:47.452 Accel Perf Configuration: 00:08:47.452 Workload Type: dif_generate 00:08:47.452 Vector size: 4096 bytes 00:08:47.452 Transfer size: 4096 bytes 00:08:47.452 Block size: 512 bytes 00:08:47.452 Metadata size: 8 bytes 00:08:47.452 Vector count 1 00:08:47.452 Module: software 00:08:47.452 Queue depth: 32 00:08:47.452 Allocate depth: 32 00:08:47.452 # threads/core: 1 00:08:47.452 Run time: 1 seconds 00:08:47.452 Verify: No 00:08:47.452 00:08:47.452 Running for 1 seconds... 
00:08:47.452 00:08:47.452 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:47.452 ------------------------------------------------------------------------------------ 00:08:47.452 0,0 96288/s 382 MiB/s 0 0 00:08:47.452 ==================================================================================== 00:08:47.452 Total 96288/s 376 MiB/s 0 0' 00:08:47.452 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.452 23:22:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:47.452 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.452 23:22:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:47.452 23:22:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:47.452 23:22:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:47.452 23:22:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.452 23:22:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.452 23:22:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:47.452 23:22:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:47.452 23:22:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:47.452 23:22:08 -- accel/accel.sh@42 -- # jq -r . 00:08:47.452 [2024-07-11 23:22:08.259564] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:47.452 [2024-07-11 23:22:08.259751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145604 ] 00:08:47.452 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.452 [2024-07-11 23:22:08.353975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.711 [2024-07-11 23:22:08.449694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val= 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val= 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val=0x1 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val= 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val= 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val=dif_generate 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 
00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val= 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val=software 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@23 -- # accel_module=software 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val=32 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val=32 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val=1 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val=No 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val= 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.711 23:22:08 -- accel/accel.sh@21 -- # val= 00:08:47.711 23:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.711 23:22:08 -- accel/accel.sh@20 -- # read -r var val 00:08:49.083 23:22:09 -- accel/accel.sh@21 -- # val= 00:08:49.083 23:22:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # IFS=: 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # read -r var val 00:08:49.083 23:22:09 -- accel/accel.sh@21 -- # val= 00:08:49.083 23:22:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # IFS=: 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # read -r var val 00:08:49.083 23:22:09 -- accel/accel.sh@21 -- # val= 00:08:49.083 23:22:09 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # IFS=: 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # read -r var val 00:08:49.083 23:22:09 -- accel/accel.sh@21 -- # val= 00:08:49.083 23:22:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # IFS=: 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # read -r var val 00:08:49.083 23:22:09 -- accel/accel.sh@21 -- # val= 00:08:49.083 23:22:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # IFS=: 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # read -r var val 00:08:49.083 23:22:09 -- accel/accel.sh@21 -- # val= 00:08:49.083 23:22:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # IFS=: 00:08:49.083 23:22:09 -- accel/accel.sh@20 -- # read -r var val 00:08:49.083 23:22:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:49.083 23:22:09 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:49.083 23:22:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.083 00:08:49.083 real 0m2.902s 00:08:49.083 user 0m2.556s 00:08:49.083 sys 0m0.340s 00:08:49.083 23:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.083 23:22:09 -- common/autotest_common.sh@10 -- # set +x 00:08:49.083 ************************************ 00:08:49.083 END TEST accel_dif_generate 00:08:49.083 ************************************ 00:08:49.083 23:22:09 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:49.083 23:22:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:49.083 23:22:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.083 23:22:09 -- common/autotest_common.sh@10 -- # set +x 00:08:49.083 ************************************ 00:08:49.083 START TEST accel_dif_generate_copy 00:08:49.083 ************************************ 00:08:49.083 23:22:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:49.083 23:22:09 -- accel/accel.sh@16 -- # local accel_opc 00:08:49.083 23:22:09 -- accel/accel.sh@17 -- # local accel_module 00:08:49.083 23:22:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:49.083 23:22:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:49.083 23:22:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:49.083 23:22:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:49.083 23:22:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.083 23:22:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.083 23:22:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:49.083 23:22:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:49.083 23:22:09 -- accel/accel.sh@41 -- # local IFS=, 00:08:49.083 23:22:09 -- accel/accel.sh@42 -- # jq -r . 00:08:49.083 [2024-07-11 23:22:09.743380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
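
Every case in this suite funnels through the same example binary shown in the trace. As a sketch, the dif_generate pass that just finished should be reproducible outside run_test along these lines — the path and the -t/-w flags are exactly as logged, while dropping the harness-managed '-c /dev/fd/62' JSON config is an assumption (with no extra modules configured above, the software module is the expected fallback):

    # one-second software dif_generate run, mirroring the logged invocation
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w dif_generate
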
00:08:49.083 [2024-07-11 23:22:09.743464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145767 ] 00:08:49.083 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.083 [2024-07-11 23:22:09.813332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.083 [2024-07-11 23:22:09.907200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.456 23:22:11 -- accel/accel.sh@18 -- # out=' 00:08:50.456 SPDK Configuration: 00:08:50.456 Core mask: 0x1 00:08:50.456 00:08:50.456 Accel Perf Configuration: 00:08:50.456 Workload Type: dif_generate_copy 00:08:50.456 Vector size: 4096 bytes 00:08:50.456 Transfer size: 4096 bytes 00:08:50.456 Vector count 1 00:08:50.456 Module: software 00:08:50.456 Queue depth: 32 00:08:50.456 Allocate depth: 32 00:08:50.456 # threads/core: 1 00:08:50.456 Run time: 1 seconds 00:08:50.456 Verify: No 00:08:50.456 00:08:50.456 Running for 1 seconds... 00:08:50.456 00:08:50.456 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:50.456 ------------------------------------------------------------------------------------ 00:08:50.456 0,0 75744/s 300 MiB/s 0 0 00:08:50.456 ==================================================================================== 00:08:50.456 Total 75744/s 295 MiB/s 0 0' 00:08:50.456 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.456 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.456 23:22:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:50.456 23:22:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:50.456 23:22:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:50.456 23:22:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:50.456 23:22:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.456 23:22:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.456 23:22:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.456 23:22:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.456 23:22:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.456 23:22:11 -- accel/accel.sh@42 -- # jq -r . 00:08:50.456 [2024-07-11 23:22:11.176859] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
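
For the geometry behind these dif_* rates: the dif_verify and dif_generate runs declare 512-byte blocks with 8 bytes of metadata, so each 4096-byte transfer spans eight blocks, each carrying one 8-byte protection tuple. Splitting that tuple into guard CRC, application tag and reference tag is standard T10 DIF layout rather than anything this log states; the byte accounting itself follows from the logged sizes:

    # 4096-byte transfer over 512-byte blocks -> eight 8-byte DIF tuples
    blocks=$(( 4096 / 512 ))    # 8 blocks per transfer
    echo $(( blocks * 8 ))      # 64 bytes of protection info generated/verified per transfer
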
00:08:50.456 [2024-07-11 23:22:11.177028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146028 ] 00:08:50.456 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.456 [2024-07-11 23:22:11.271036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.456 [2024-07-11 23:22:11.364986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val= 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val= 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val=0x1 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val= 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val= 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val= 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val=software 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@23 -- # accel_module=software 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val=32 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val=32 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var 
val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val=1 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val=No 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val= 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:50.716 23:22:11 -- accel/accel.sh@21 -- # val= 00:08:50.716 23:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # IFS=: 00:08:50.716 23:22:11 -- accel/accel.sh@20 -- # read -r var val 00:08:52.095 23:22:12 -- accel/accel.sh@21 -- # val= 00:08:52.095 23:22:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # IFS=: 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # read -r var val 00:08:52.095 23:22:12 -- accel/accel.sh@21 -- # val= 00:08:52.095 23:22:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # IFS=: 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # read -r var val 00:08:52.095 23:22:12 -- accel/accel.sh@21 -- # val= 00:08:52.095 23:22:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # IFS=: 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # read -r var val 00:08:52.095 23:22:12 -- accel/accel.sh@21 -- # val= 00:08:52.095 23:22:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # IFS=: 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # read -r var val 00:08:52.095 23:22:12 -- accel/accel.sh@21 -- # val= 00:08:52.095 23:22:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # IFS=: 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # read -r var val 00:08:52.095 23:22:12 -- accel/accel.sh@21 -- # val= 00:08:52.095 23:22:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # IFS=: 00:08:52.095 23:22:12 -- accel/accel.sh@20 -- # read -r var val 00:08:52.095 23:22:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:52.095 23:22:12 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:52.095 23:22:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:52.095 00:08:52.095 real 0m2.885s 00:08:52.095 user 0m2.552s 00:08:52.095 sys 0m0.324s 00:08:52.095 23:22:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.095 23:22:12 -- common/autotest_common.sh@10 -- # set +x 00:08:52.095 ************************************ 00:08:52.095 END TEST accel_dif_generate_copy 00:08:52.095 ************************************ 00:08:52.095 23:22:12 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:52.095 23:22:12 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:52.095 23:22:12 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:52.095 23:22:12 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.095 23:22:12 -- common/autotest_common.sh@10 -- # set +x 00:08:52.095 ************************************ 00:08:52.095 START TEST accel_comp 00:08:52.095 ************************************ 00:08:52.095 23:22:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:52.095 23:22:12 -- accel/accel.sh@16 -- # local accel_opc 00:08:52.095 23:22:12 -- accel/accel.sh@17 -- # local accel_module 00:08:52.095 23:22:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:52.095 23:22:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:52.095 23:22:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:52.095 23:22:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:52.095 23:22:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.095 23:22:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.095 23:22:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:52.095 23:22:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:52.095 23:22:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:52.095 23:22:12 -- accel/accel.sh@42 -- # jq -r . 00:08:52.095 [2024-07-11 23:22:12.661358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:52.095 [2024-07-11 23:22:12.661462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146189 ] 00:08:52.095 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.095 [2024-07-11 23:22:12.728493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.095 [2024-07-11 23:22:12.821517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.469 23:22:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:53.469 00:08:53.469 SPDK Configuration: 00:08:53.469 Core mask: 0x1 00:08:53.469 00:08:53.469 Accel Perf Configuration: 00:08:53.469 Workload Type: compress 00:08:53.469 Transfer size: 4096 bytes 00:08:53.469 Vector count 1 00:08:53.469 Module: software 00:08:53.469 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:53.469 Queue depth: 32 00:08:53.469 Allocate depth: 32 00:08:53.469 # threads/core: 1 00:08:53.469 Run time: 1 seconds 00:08:53.469 Verify: No 00:08:53.469 00:08:53.469 Running for 1 seconds... 
00:08:53.469 00:08:53.469 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:53.469 ------------------------------------------------------------------------------------ 00:08:53.469 0,0 32448/s 135 MiB/s 0 0 00:08:53.469 ==================================================================================== 00:08:53.469 Total 32448/s 126 MiB/s 0 0' 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:53.469 23:22:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.469 23:22:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:53.469 23:22:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.469 23:22:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.469 23:22:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:53.469 23:22:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:53.469 23:22:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:53.469 23:22:14 -- accel/accel.sh@42 -- # jq -r . 00:08:53.469 [2024-07-11 23:22:14.095507] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:53.469 [2024-07-11 23:22:14.095677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146331 ] 00:08:53.469 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.469 [2024-07-11 23:22:14.189120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.469 [2024-07-11 23:22:14.282538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=0x1 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=compress 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 
23:22:14 -- accel/accel.sh@24 -- # accel_opc=compress 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=software 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@23 -- # accel_module=software 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=32 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=32 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=1 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.469 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.469 23:22:14 -- accel/accel.sh@21 -- # val=No 00:08:53.469 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.470 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.470 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.470 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.470 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.470 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.470 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:53.470 23:22:14 -- accel/accel.sh@21 -- # val= 00:08:53.470 23:22:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.470 23:22:14 -- accel/accel.sh@20 -- # IFS=: 00:08:53.470 23:22:14 -- accel/accel.sh@20 -- # read -r var val 00:08:54.847 23:22:15 -- accel/accel.sh@21 -- # val= 00:08:54.847 23:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.847 23:22:15 -- accel/accel.sh@21 -- # val= 00:08:54.847 23:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.847 23:22:15 -- accel/accel.sh@21 -- # val= 00:08:54.847 23:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # 
IFS=: 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.847 23:22:15 -- accel/accel.sh@21 -- # val= 00:08:54.847 23:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.847 23:22:15 -- accel/accel.sh@21 -- # val= 00:08:54.847 23:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.847 23:22:15 -- accel/accel.sh@21 -- # val= 00:08:54.847 23:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.847 23:22:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.847 23:22:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:54.847 23:22:15 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:54.847 23:22:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.847 00:08:54.847 real 0m2.886s 00:08:54.847 user 0m2.551s 00:08:54.847 sys 0m0.327s 00:08:54.847 23:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.847 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.847 ************************************ 00:08:54.847 END TEST accel_comp 00:08:54.847 ************************************ 00:08:54.847 23:22:15 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:54.847 23:22:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:54.847 23:22:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.847 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.847 ************************************ 00:08:54.847 START TEST accel_decomp 00:08:54.847 ************************************ 00:08:54.847 23:22:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:54.847 23:22:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:54.847 23:22:15 -- accel/accel.sh@17 -- # local accel_module 00:08:54.847 23:22:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:54.847 23:22:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:54.847 23:22:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:54.847 23:22:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:54.847 23:22:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.847 23:22:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.847 23:22:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:54.847 23:22:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:54.847 23:22:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:54.847 23:22:15 -- accel/accel.sh@42 -- # jq -r . 00:08:54.847 [2024-07-11 23:22:15.576963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
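
Both compression cases chew on the same sample corpus, passed with -l; the decompress run starting here also adds -y, which matches its 'Verify: Yes' configuration where the compress pass reported 'Verify: No'. A minimal hand-run equivalent, again under the assumption that the harness JSON config can be omitted:

    # decompress the sample corpus for one second with verification enabled
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
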
00:08:54.847 [2024-07-11 23:22:15.577052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146583 ] 00:08:54.847 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.847 [2024-07-11 23:22:15.643900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.847 [2024-07-11 23:22:15.736298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.224 23:22:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:56.224 00:08:56.224 SPDK Configuration: 00:08:56.224 Core mask: 0x1 00:08:56.224 00:08:56.224 Accel Perf Configuration: 00:08:56.224 Workload Type: decompress 00:08:56.224 Transfer size: 4096 bytes 00:08:56.224 Vector count 1 00:08:56.224 Module: software 00:08:56.224 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:56.224 Queue depth: 32 00:08:56.224 Allocate depth: 32 00:08:56.224 # threads/core: 1 00:08:56.224 Run time: 1 seconds 00:08:56.224 Verify: Yes 00:08:56.224 00:08:56.224 Running for 1 seconds... 00:08:56.224 00:08:56.224 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:56.224 ------------------------------------------------------------------------------------ 00:08:56.224 0,0 55616/s 102 MiB/s 0 0 00:08:56.224 ==================================================================================== 00:08:56.224 Total 55616/s 217 MiB/s 0 0' 00:08:56.224 23:22:16 -- accel/accel.sh@20 -- # IFS=: 00:08:56.225 23:22:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:56.225 23:22:16 -- accel/accel.sh@20 -- # read -r var val 00:08:56.225 23:22:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:56.225 23:22:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:56.225 23:22:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:56.225 23:22:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:56.225 23:22:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:56.225 23:22:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:56.225 23:22:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:56.225 23:22:16 -- accel/accel.sh@41 -- # local IFS=, 00:08:56.225 23:22:16 -- accel/accel.sh@42 -- # jq -r . 00:08:56.225 [2024-07-11 23:22:17.009905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
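
The decompress Total row converts the same way (55616 transfers/s at 4096 bytes is 217 MiB/s), and set against the compress pass's 32448/s it puts the software module roughly 1.7x faster inflating than deflating — a comparison that only holds for this corpus and this single-threaded run:

    # decompress vs compress transfer rates, from the two Total rows above
    echo "scale=2; 55616 / 32448" | bc   # ~1.71
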
00:08:56.225 [2024-07-11 23:22:17.010073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146753 ] 00:08:56.225 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.225 [2024-07-11 23:22:17.102967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.483 [2024-07-11 23:22:17.196910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.483 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.483 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.483 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.483 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.483 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=0x1 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=decompress 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=software 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@23 -- # accel_module=software 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=32 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 
-- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=32 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=1 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val=Yes 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:56.484 23:22:17 -- accel/accel.sh@21 -- # val= 00:08:56.484 23:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # IFS=: 00:08:56.484 23:22:17 -- accel/accel.sh@20 -- # read -r var val 00:08:57.860 23:22:18 -- accel/accel.sh@21 -- # val= 00:08:57.860 23:22:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.860 23:22:18 -- accel/accel.sh@21 -- # val= 00:08:57.860 23:22:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.860 23:22:18 -- accel/accel.sh@21 -- # val= 00:08:57.860 23:22:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.860 23:22:18 -- accel/accel.sh@21 -- # val= 00:08:57.860 23:22:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.860 23:22:18 -- accel/accel.sh@21 -- # val= 00:08:57.860 23:22:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.860 23:22:18 -- accel/accel.sh@21 -- # val= 00:08:57.860 23:22:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.860 23:22:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.860 23:22:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:57.860 23:22:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:57.860 23:22:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:57.860 00:08:57.860 real 0m2.886s 00:08:57.860 user 0m2.554s 00:08:57.860 sys 0m0.325s 00:08:57.860 23:22:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.860 23:22:18 -- common/autotest_common.sh@10 -- # set +x 00:08:57.860 ************************************ 00:08:57.860 END TEST accel_decomp 00:08:57.860 ************************************ 00:08:57.860 23:22:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:57.860 23:22:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:57.860 23:22:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.860 23:22:18 -- common/autotest_common.sh@10 -- # set +x 00:08:57.860 ************************************ 00:08:57.860 START TEST accel_decmop_full 00:08:57.860 ************************************ 00:08:57.860 23:22:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:57.860 23:22:18 -- accel/accel.sh@16 -- # local accel_opc 00:08:57.860 23:22:18 -- accel/accel.sh@17 -- # local accel_module 00:08:57.860 23:22:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:57.860 23:22:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:57.860 23:22:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:57.860 23:22:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:57.860 23:22:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.860 23:22:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.860 23:22:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:57.860 23:22:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:57.860 23:22:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:57.860 23:22:18 -- accel/accel.sh@42 -- # jq -r . 00:08:57.860 [2024-07-11 23:22:18.498671] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:57.861 [2024-07-11 23:22:18.498764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146910 ] 00:08:57.861 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.861 [2024-07-11 23:22:18.565701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.861 [2024-07-11 23:22:18.658934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.234 23:22:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:59.234 00:08:59.234 SPDK Configuration: 00:08:59.234 Core mask: 0x1 00:08:59.234 00:08:59.234 Accel Perf Configuration: 00:08:59.234 Workload Type: decompress 00:08:59.234 Transfer size: 111250 bytes 00:08:59.234 Vector count 1 00:08:59.234 Module: software 00:08:59.234 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:59.234 Queue depth: 32 00:08:59.234 Allocate depth: 32 00:08:59.234 # threads/core: 1 00:08:59.234 Run time: 1 seconds 00:08:59.234 Verify: Yes 00:08:59.234 00:08:59.234 Running for 1 seconds... 
00:08:59.234 00:08:59.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:59.234 ------------------------------------------------------------------------------------ 00:08:59.234 0,0 3808/s 157 MiB/s 0 0 00:08:59.234 ==================================================================================== 00:08:59.234 Total 3808/s 404 MiB/s 0 0' 00:08:59.234 23:22:19 -- accel/accel.sh@20 -- # IFS=: 00:08:59.234 23:22:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:59.234 23:22:19 -- accel/accel.sh@20 -- # read -r var val 00:08:59.234 23:22:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:59.234 23:22:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:59.234 23:22:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:59.234 23:22:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.234 23:22:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.234 23:22:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:59.234 23:22:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:59.235 23:22:19 -- accel/accel.sh@41 -- # local IFS=, 00:08:59.235 23:22:19 -- accel/accel.sh@42 -- # jq -r . 00:08:59.235 [2024-07-11 23:22:19.948840] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:59.235 [2024-07-11 23:22:19.949007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147054 ] 00:08:59.235 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.235 [2024-07-11 23:22:20.049242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.235 [2024-07-11 23:22:20.142554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=0x1 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=decompress 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 
00:08:59.493 23:22:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=software 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=32 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=32 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=1 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val=Yes 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.493 23:22:20 -- accel/accel.sh@21 -- # val= 00:08:59.493 23:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.493 23:22:20 -- accel/accel.sh@20 -- # read -r var val 00:09:00.869 23:22:21 -- accel/accel.sh@21 -- # val= 00:09:00.869 23:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # IFS=: 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # read -r var val 00:09:00.869 23:22:21 -- accel/accel.sh@21 -- # val= 00:09:00.869 23:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # IFS=: 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # read -r var val 00:09:00.869 23:22:21 -- accel/accel.sh@21 -- # val= 00:09:00.869 23:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.869 23:22:21 -- 
accel/accel.sh@20 -- # IFS=: 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # read -r var val 00:09:00.869 23:22:21 -- accel/accel.sh@21 -- # val= 00:09:00.869 23:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # IFS=: 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # read -r var val 00:09:00.869 23:22:21 -- accel/accel.sh@21 -- # val= 00:09:00.869 23:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # IFS=: 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # read -r var val 00:09:00.869 23:22:21 -- accel/accel.sh@21 -- # val= 00:09:00.869 23:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # IFS=: 00:09:00.869 23:22:21 -- accel/accel.sh@20 -- # read -r var val 00:09:00.869 23:22:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:00.869 23:22:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:00.869 23:22:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:00.869 00:09:00.869 real 0m2.918s 00:09:00.869 user 0m2.580s 00:09:00.869 sys 0m0.329s 00:09:00.869 23:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.869 23:22:21 -- common/autotest_common.sh@10 -- # set +x 00:09:00.869 ************************************ 00:09:00.869 END TEST accel_decmop_full 00:09:00.869 ************************************ 00:09:00.869 23:22:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:00.869 23:22:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:00.869 23:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.869 23:22:21 -- common/autotest_common.sh@10 -- # set +x 00:09:00.869 ************************************ 00:09:00.869 START TEST accel_decomp_mcore 00:09:00.869 ************************************ 00:09:00.869 23:22:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:00.869 23:22:21 -- accel/accel.sh@16 -- # local accel_opc 00:09:00.869 23:22:21 -- accel/accel.sh@17 -- # local accel_module 00:09:00.869 23:22:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:00.869 23:22:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:00.869 23:22:21 -- accel/accel.sh@12 -- # build_accel_config 00:09:00.869 23:22:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:00.869 23:22:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:00.869 23:22:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:00.869 23:22:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:00.869 23:22:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:00.869 23:22:21 -- accel/accel.sh@41 -- # local IFS=, 00:09:00.869 23:22:21 -- accel/accel.sh@42 -- # jq -r . 00:09:00.869 [2024-07-11 23:22:21.457218] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
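The harness drives each benchmark by launching build/examples/accel_perf with its accel JSON configuration fed over /dev/fd/62 through process substitution, as the accel.sh@12 trace lines above show. A minimal standalone sketch of the same multi-core decompress run, assuming a built SPDK tree at $SPDK_DIR (placeholder) and an empty JSON config standing in for the harness's generated one:

  # Re-run the decompress benchmark outside the test harness (all flags from the trace:
  # -t 1 second run, -w decompress workload, -l compressed input, -y verify, -m 0xf core mask).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -m 0xf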
00:09:00.869 [2024-07-11 23:22:21.457307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147329 ] 00:09:00.869 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.869 [2024-07-11 23:22:21.550735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.869 [2024-07-11 23:22:21.647623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.869 [2024-07-11 23:22:21.647652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.869 [2024-07-11 23:22:21.647679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.869 [2024-07-11 23:22:21.647682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.246 23:22:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:02.246 00:09:02.246 SPDK Configuration: 00:09:02.246 Core mask: 0xf 00:09:02.246 00:09:02.246 Accel Perf Configuration: 00:09:02.246 Workload Type: decompress 00:09:02.246 Transfer size: 4096 bytes 00:09:02.246 Vector count 1 00:09:02.246 Module: software 00:09:02.246 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:02.246 Queue depth: 32 00:09:02.246 Allocate depth: 32 00:09:02.246 # threads/core: 1 00:09:02.246 Run time: 1 seconds 00:09:02.246 Verify: Yes 00:09:02.246 00:09:02.246 Running for 1 seconds... 00:09:02.246 00:09:02.246 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:02.246 ------------------------------------------------------------------------------------ 00:09:02.246 0,0 53248/s 98 MiB/s 0 0 00:09:02.246 3,0 53952/s 99 MiB/s 0 0 00:09:02.246 2,0 53952/s 99 MiB/s 0 0 00:09:02.246 1,0 53952/s 99 MiB/s 0 0 00:09:02.246 ==================================================================================== 00:09:02.246 Total 215104/s 840 MiB/s 0 0' 00:09:02.246 23:22:22 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:02.246 23:22:22 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:02.246 23:22:22 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.246 23:22:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.246 23:22:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.246 23:22:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.246 23:22:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.246 23:22:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.246 23:22:22 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.246 23:22:22 -- accel/accel.sh@42 -- # jq -r . 00:09:02.246 [2024-07-11 23:22:22.920608] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
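Every app start above logs the same EAL notice about the 2048 kB hugepage pool before the reactors come up. A quick way to inspect those pools per NUMA node, using standard Linux sysfs paths rather than anything SPDK-specific:

  # Show free vs. configured 2048 kB hugepages on each NUMA node.
  for n in /sys/devices/system/node/node*; do
      printf '%s: %s free of %s\n' "$n" \
          "$(cat "$n/hugepages/hugepages-2048kB/free_hugepages")" \
          "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
  done
  # SPDK's setup script can grow the pool, e.g.: sudo HUGEMEM=2048 scripts/setup.sh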
00:09:02.246 [2024-07-11 23:22:22.920778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147480 ] 00:09:02.246 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.246 [2024-07-11 23:22:23.014953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.246 [2024-07-11 23:22:23.111357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.246 [2024-07-11 23:22:23.111415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.246 [2024-07-11 23:22:23.111468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.246 [2024-07-11 23:22:23.111471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=0xf 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=decompress 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=software 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@23 -- # accel_module=software 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case 
"$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=32 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=32 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=1 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val=Yes 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.246 23:22:23 -- accel/accel.sh@21 -- # val= 00:09:02.246 23:22:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.246 23:22:23 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 
23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:22:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:22:24 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:22:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:03.625 23:22:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:03.625 23:22:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:03.625 00:09:03.625 real 0m2.928s 00:09:03.625 user 0m9.494s 00:09:03.625 sys 0m0.366s 00:09:03.625 23:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.625 23:22:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 ************************************ 00:09:03.625 END TEST accel_decomp_mcore 00:09:03.625 ************************************ 00:09:03.625 23:22:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:03.625 23:22:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:03.625 23:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.625 23:22:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 ************************************ 00:09:03.625 START TEST accel_decomp_full_mcore 00:09:03.625 ************************************ 00:09:03.625 23:22:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:03.625 23:22:24 -- accel/accel.sh@16 -- # local accel_opc 00:09:03.625 23:22:24 -- accel/accel.sh@17 -- # local accel_module 00:09:03.625 23:22:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:03.625 23:22:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:03.625 23:22:24 -- accel/accel.sh@12 -- # build_accel_config 00:09:03.625 23:22:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:03.625 23:22:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.625 23:22:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.625 23:22:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:03.625 23:22:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:03.625 23:22:24 -- accel/accel.sh@41 -- # local IFS=, 00:09:03.625 23:22:24 -- accel/accel.sh@42 -- # jq -r . 00:09:03.625 [2024-07-11 23:22:24.417117] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
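The "_full" variant starting here adds -o 0; judging from the configuration dumps, that sizes each transfer from the input payload (111250 bytes) instead of the default 4096 bytes. A hedged side-by-side sketch, with $SPDK_DIR as before:

  # Contrast default 4 KiB transfers with whole-file '-o 0' transfers and keep only
  # the transfer-size echo and the aggregate row. The unquoted $osize is deliberate:
  # the empty first entry must expand to no argument at all.
  for osize in "" "-o 0"; do
      "$SPDK_DIR/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
          -l "$SPDK_DIR/test/accel/bib" -y -m 0xf $osize |
          grep -E 'Transfer size|Total'
  done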
00:09:03.625 [2024-07-11 23:22:24.417249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147639 ] 00:09:03.625 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.625 [2024-07-11 23:22:24.501684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.884 [2024-07-11 23:22:24.595663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.884 [2024-07-11 23:22:24.595720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.884 [2024-07-11 23:22:24.595770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.884 [2024-07-11 23:22:24.595773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.332 23:22:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:05.332 00:09:05.332 SPDK Configuration: 00:09:05.332 Core mask: 0xf 00:09:05.332 00:09:05.332 Accel Perf Configuration: 00:09:05.332 Workload Type: decompress 00:09:05.332 Transfer size: 111250 bytes 00:09:05.332 Vector count 1 00:09:05.332 Module: software 00:09:05.332 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:05.332 Queue depth: 32 00:09:05.332 Allocate depth: 32 00:09:05.332 # threads/core: 1 00:09:05.332 Run time: 1 seconds 00:09:05.333 Verify: Yes 00:09:05.333 00:09:05.333 Running for 1 seconds... 00:09:05.333 00:09:05.333 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:05.333 ------------------------------------------------------------------------------------ 00:09:05.333 0,0 3776/s 155 MiB/s 0 0 00:09:05.333 3,0 3776/s 155 MiB/s 0 0 00:09:05.333 2,0 3776/s 155 MiB/s 0 0 00:09:05.333 1,0 3776/s 155 MiB/s 0 0 00:09:05.333 ==================================================================================== 00:09:05.333 Total 15104/s 1602 MiB/s 0 0' 00:09:05.333 23:22:25 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:05.333 23:22:25 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:05.333 23:22:25 -- accel/accel.sh@12 -- # build_accel_config 00:09:05.333 23:22:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:05.333 23:22:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.333 23:22:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.333 23:22:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:05.333 23:22:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:05.333 23:22:25 -- accel/accel.sh@41 -- # local IFS=, 00:09:05.333 23:22:25 -- accel/accel.sh@42 -- # jq -r . 00:09:05.333 [2024-07-11 23:22:25.881565] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
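Each run ends with the same fixed-format results table, so the aggregate rows are easy to recover from a captured log; build.log below is a hypothetical capture of this console output:

  # Print every per-run aggregate row ("Total <transfers>/s <bandwidth> ...").
  grep -E 'Total +[0-9]+/s' build.log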
00:09:05.333 [2024-07-11 23:22:25.881734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147789 ] 00:09:05.333 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.333 [2024-07-11 23:22:25.976574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.333 [2024-07-11 23:22:26.073068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.333 [2024-07-11 23:22:26.073326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.333 [2024-07-11 23:22:26.073351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.333 [2024-07-11 23:22:26.073354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=0xf 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=decompress 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=software 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@23 -- # accel_module=software 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case 
"$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=32 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=32 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=1 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val=Yes 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:05.333 23:22:26 -- accel/accel.sh@21 -- # val= 00:09:05.333 23:22:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # IFS=: 00:09:05.333 23:22:26 -- accel/accel.sh@20 -- # read -r var val 00:09:06.707 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.707 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.707 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.707 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.707 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.707 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.707 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.707 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.707 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.707 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.707 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.707 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.707 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.708 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.708 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.708 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.708 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.708 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.708 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.708 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.708 
23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.708 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.708 23:22:27 -- accel/accel.sh@21 -- # val= 00:09:06.708 23:22:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.708 23:22:27 -- accel/accel.sh@20 -- # IFS=: 00:09:06.708 23:22:27 -- accel/accel.sh@20 -- # read -r var val 00:09:06.708 23:22:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:06.708 23:22:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:06.708 23:22:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:06.708 00:09:06.708 real 0m2.939s 00:09:06.708 user 0m9.582s 00:09:06.708 sys 0m0.349s 00:09:06.708 23:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.708 23:22:27 -- common/autotest_common.sh@10 -- # set +x 00:09:06.708 ************************************ 00:09:06.708 END TEST accel_decomp_full_mcore 00:09:06.708 ************************************ 00:09:06.708 23:22:27 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:06.708 23:22:27 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:06.708 23:22:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.708 23:22:27 -- common/autotest_common.sh@10 -- # set +x 00:09:06.708 ************************************ 00:09:06.708 START TEST accel_decomp_mthread 00:09:06.708 ************************************ 00:09:06.708 23:22:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:06.708 23:22:27 -- accel/accel.sh@16 -- # local accel_opc 00:09:06.708 23:22:27 -- accel/accel.sh@17 -- # local accel_module 00:09:06.708 23:22:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:06.708 23:22:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:06.708 23:22:27 -- accel/accel.sh@12 -- # build_accel_config 00:09:06.708 23:22:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:06.708 23:22:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:06.708 23:22:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.708 23:22:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:06.708 23:22:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:06.708 23:22:27 -- accel/accel.sh@41 -- # local IFS=, 00:09:06.708 23:22:27 -- accel/accel.sh@42 -- # jq -r . 00:09:06.708 [2024-07-11 23:22:27.392333] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:06.708 [2024-07-11 23:22:27.392446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148066 ] 00:09:06.708 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.708 [2024-07-11 23:22:27.474041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.708 [2024-07-11 23:22:27.567350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.085 23:22:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
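The mthread variant starting here passes -T 2, i.e. two worker threads per reactor core; its results table reports one row per (core,thread) pair such as 0,0 and 0,1. A sketch for sweeping the thread count on a single core, $SPDK_DIR as before:

  # Sweep threads per core and keep the '# threads/core' echo plus the aggregate row.
  for t in 1 2 4; do
      echo "== -T $t =="
      "$SPDK_DIR/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
          -l "$SPDK_DIR/test/accel/bib" -y -T "$t" |
          grep -E 'threads/core|Total'
  done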
00:09:08.085 00:09:08.085 SPDK Configuration: 00:09:08.085 Core mask: 0x1 00:09:08.085 00:09:08.085 Accel Perf Configuration: 00:09:08.085 Workload Type: decompress 00:09:08.085 Transfer size: 4096 bytes 00:09:08.085 Vector count 1 00:09:08.085 Module: software 00:09:08.085 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:08.085 Queue depth: 32 00:09:08.085 Allocate depth: 32 00:09:08.085 # threads/core: 2 00:09:08.085 Run time: 1 seconds 00:09:08.085 Verify: Yes 00:09:08.085 00:09:08.085 Running for 1 seconds... 00:09:08.085 00:09:08.085 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:08.085 ------------------------------------------------------------------------------------ 00:09:08.085 0,1 28096/s 51 MiB/s 0 0 00:09:08.085 0,0 28000/s 51 MiB/s 0 0 00:09:08.085 ==================================================================================== 00:09:08.085 Total 56096/s 219 MiB/s 0 0' 00:09:08.085 23:22:28 -- accel/accel.sh@20 -- # IFS=: 00:09:08.085 23:22:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:08.085 23:22:28 -- accel/accel.sh@20 -- # read -r var val 00:09:08.085 23:22:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:08.085 23:22:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:08.085 23:22:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:08.085 23:22:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.085 23:22:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.085 23:22:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:08.085 23:22:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:08.085 23:22:28 -- accel/accel.sh@41 -- # local IFS=, 00:09:08.085 23:22:28 -- accel/accel.sh@42 -- # jq -r . 00:09:08.085 [2024-07-11 23:22:28.846815] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
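The aggregate bandwidth in the table above is consistent with transfers/s times the 4096-byte transfer size, which is a handy cross-check when reading these tables:

  # 56096 transfers/s * 4096 bytes per transfer, converted to MiB/s.
  echo $(( 56096 * 4096 / 1048576 ))   # prints 219, matching the "219 MiB/s" total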
00:09:08.085 [2024-07-11 23:22:28.846994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148212 ] 00:09:08.085 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.085 [2024-07-11 23:22:28.942434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.344 [2024-07-11 23:22:29.036242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=0x1 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=decompress 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=software 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@23 -- # accel_module=software 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=32 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 
-- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=32 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=2 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val=Yes 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:08.344 23:22:29 -- accel/accel.sh@21 -- # val= 00:09:08.344 23:22:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # IFS=: 00:09:08.344 23:22:29 -- accel/accel.sh@20 -- # read -r var val 00:09:09.721 23:22:30 -- accel/accel.sh@21 -- # val= 00:09:09.721 23:22:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.721 23:22:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.721 23:22:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.721 23:22:30 -- accel/accel.sh@21 -- # val= 00:09:09.721 23:22:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.721 23:22:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.722 23:22:30 -- accel/accel.sh@21 -- # val= 00:09:09.722 23:22:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.722 23:22:30 -- accel/accel.sh@21 -- # val= 00:09:09.722 23:22:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.722 23:22:30 -- accel/accel.sh@21 -- # val= 00:09:09.722 23:22:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.722 23:22:30 -- accel/accel.sh@21 -- # val= 00:09:09.722 23:22:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.722 23:22:30 -- accel/accel.sh@21 -- # val= 00:09:09.722 23:22:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.722 23:22:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.722 23:22:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:09.722 23:22:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:09.722 23:22:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:09.722 00:09:09.722 real 0m2.918s 00:09:09.722 user 0m2.573s 00:09:09.722 sys 0m0.336s 00:09:09.722 23:22:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.722 23:22:30 -- common/autotest_common.sh@10 -- # set +x 
00:09:09.722 ************************************ 00:09:09.722 END TEST accel_decomp_mthread 00:09:09.722 ************************************ 00:09:09.722 23:22:30 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:09.722 23:22:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:09.722 23:22:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.722 23:22:30 -- common/autotest_common.sh@10 -- # set +x 00:09:09.722 ************************************ 00:09:09.722 START TEST accel_deomp_full_mthread 00:09:09.722 ************************************ 00:09:09.722 23:22:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:09.722 23:22:30 -- accel/accel.sh@16 -- # local accel_opc 00:09:09.722 23:22:30 -- accel/accel.sh@17 -- # local accel_module 00:09:09.722 23:22:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:09.722 23:22:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:09.722 23:22:30 -- accel/accel.sh@12 -- # build_accel_config 00:09:09.722 23:22:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:09.722 23:22:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:09.722 23:22:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:09.722 23:22:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:09.722 23:22:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:09.722 23:22:30 -- accel/accel.sh@41 -- # local IFS=, 00:09:09.722 23:22:30 -- accel/accel.sh@42 -- # jq -r . 00:09:09.722 [2024-07-11 23:22:30.354795] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:09.722 [2024-07-11 23:22:30.354957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148370 ] 00:09:09.722 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.722 [2024-07-11 23:22:30.449184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.722 [2024-07-11 23:22:30.542148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.096 23:22:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:11.096 00:09:11.096 SPDK Configuration: 00:09:11.096 Core mask: 0x1 00:09:11.096 00:09:11.096 Accel Perf Configuration: 00:09:11.096 Workload Type: decompress 00:09:11.096 Transfer size: 111250 bytes 00:09:11.096 Vector count 1 00:09:11.096 Module: software 00:09:11.096 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:11.096 Queue depth: 32 00:09:11.096 Allocate depth: 32 00:09:11.096 # threads/core: 2 00:09:11.096 Run time: 1 seconds 00:09:11.096 Verify: Yes 00:09:11.096 00:09:11.096 Running for 1 seconds... 
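The run starting here combines both earlier variations, whole-file payloads (-o 0) and two worker threads on core 0 (-T 2). Reproduced standalone under the same $SPDK_DIR assumption:

  # Whole-file decompress with two threads per core; flags taken from the trace above.
  "$SPDK_DIR/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2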
00:09:11.096 00:09:11.096 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:11.096 ------------------------------------------------------------------------------------ 00:09:11.096 0,1 1952/s 80 MiB/s 0 0 00:09:11.096 0,0 1920/s 79 MiB/s 0 0 00:09:11.096 ==================================================================================== 00:09:11.096 Total 3872/s 410 MiB/s 0 0' 00:09:11.096 23:22:31 -- accel/accel.sh@20 -- # IFS=: 00:09:11.096 23:22:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:11.096 23:22:31 -- accel/accel.sh@20 -- # read -r var val 00:09:11.096 23:22:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:11.096 23:22:31 -- accel/accel.sh@12 -- # build_accel_config 00:09:11.096 23:22:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:11.096 23:22:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:11.096 23:22:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.096 23:22:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:11.096 23:22:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:11.096 23:22:31 -- accel/accel.sh@41 -- # local IFS=, 00:09:11.096 23:22:31 -- accel/accel.sh@42 -- # jq -r . 00:09:11.096 [2024-07-11 23:22:31.837320] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:11.096 [2024-07-11 23:22:31.837417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148627 ] 00:09:11.096 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.096 [2024-07-11 23:22:31.903983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.096 [2024-07-11 23:22:31.997712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val=0x1 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val=decompress 00:09:11.353 
23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val=software 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@23 -- # accel_module=software 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val=32 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val=32 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.353 23:22:32 -- accel/accel.sh@21 -- # val=2 00:09:11.353 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.353 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.354 23:22:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:11.354 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.354 23:22:32 -- accel/accel.sh@21 -- # val=Yes 00:09:11.354 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.354 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.354 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:11.354 23:22:32 -- accel/accel.sh@21 -- # val= 00:09:11.354 23:22:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # IFS=: 00:09:11.354 23:22:32 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@21 -- # val= 00:09:12.724 23:22:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # IFS=: 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@21 -- # val= 00:09:12.724 23:22:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # IFS=: 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@21 -- # val= 00:09:12.724 23:22:33 -- accel/accel.sh@22 -- # 
case "$var" in 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # IFS=: 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@21 -- # val= 00:09:12.724 23:22:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # IFS=: 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@21 -- # val= 00:09:12.724 23:22:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # IFS=: 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@21 -- # val= 00:09:12.724 23:22:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # IFS=: 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@21 -- # val= 00:09:12.724 23:22:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # IFS=: 00:09:12.724 23:22:33 -- accel/accel.sh@20 -- # read -r var val 00:09:12.724 23:22:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:12.724 23:22:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:12.724 23:22:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:12.724 00:09:12.724 real 0m2.956s 00:09:12.724 user 0m2.622s 00:09:12.724 sys 0m0.325s 00:09:12.724 23:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.724 23:22:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.724 ************************************ 00:09:12.724 END TEST accel_deomp_full_mthread 00:09:12.724 ************************************ 00:09:12.724 23:22:33 -- accel/accel.sh@116 -- # [[ n == y ]] 00:09:12.724 23:22:33 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:12.724 23:22:33 -- accel/accel.sh@129 -- # build_accel_config 00:09:12.724 23:22:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:12.724 23:22:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:12.724 23:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.724 23:22:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:12.724 23:22:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:12.724 23:22:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.724 23:22:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:12.724 23:22:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:12.724 23:22:33 -- accel/accel.sh@41 -- # local IFS=, 00:09:12.724 23:22:33 -- accel/accel.sh@42 -- # jq -r . 00:09:12.724 ************************************ 00:09:12.724 START TEST accel_dif_functional_tests 00:09:12.724 ************************************ 00:09:12.724 23:22:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:12.724 [2024-07-11 23:22:33.364905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
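The DIF suite starting here switches from accel_perf to the dedicated test/accel/dif/dif binary, still taking its accel configuration over a file descriptor. A direct invocation, again assuming an empty JSON config stands in for the harness's generated one:

  # Run the CUnit DIF verify/generate tests directly (binary path and -c usage from the trace).
  "$SPDK_DIR/test/accel/dif/dif" -c <(echo '{}')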
00:09:12.724 [2024-07-11 23:22:33.365018] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148789 ] 00:09:12.724 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.724 [2024-07-11 23:22:33.441065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.724 [2024-07-11 23:22:33.537064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.724 [2024-07-11 23:22:33.537150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.724 [2024-07-11 23:22:33.537149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.724 00:09:12.724 00:09:12.724 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.724 http://cunit.sourceforge.net/ 00:09:12.724 00:09:12.724 00:09:12.724 Suite: accel_dif 00:09:12.724 Test: verify: DIF generated, GUARD check ...passed 00:09:12.724 Test: verify: DIF generated, APPTAG check ...passed 00:09:12.724 Test: verify: DIF generated, REFTAG check ...passed 00:09:12.724 Test: verify: DIF not generated, GUARD check ...[2024-07-11 23:22:33.637639] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:12.724 [2024-07-11 23:22:33.637709] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:12.724 passed 00:09:12.724 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 23:22:33.637751] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:12.724 [2024-07-11 23:22:33.637781] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:12.724 passed 00:09:12.724 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 23:22:33.637817] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:12.724 [2024-07-11 23:22:33.637852] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:12.724 passed 00:09:12.724 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:12.724 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 23:22:33.637919] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:12.724 passed 00:09:12.724 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:12.724 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:12.724 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:12.725 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 23:22:33.638083] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:12.725 passed 00:09:12.725 Test: generate copy: DIF generated, GUARD check ...passed 00:09:12.725 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:12.725 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:12.725 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:12.725 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:12.725 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:12.725 Test: generate copy: iovecs-len validate ...[2024-07-11 23:22:33.638360] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:12.725 passed 00:09:12.725 Test: generate copy: buffer alignment validate ...passed 00:09:12.725 00:09:12.725 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.725 suites 1 1 n/a 0 0 00:09:12.725 tests 20 20 20 0 0 00:09:12.725 asserts 204 204 204 0 n/a 00:09:12.725 00:09:12.725 Elapsed time = 0.003 seconds 00:09:12.983 00:09:12.983 real 0m0.546s 00:09:12.983 user 0m0.849s 00:09:12.983 sys 0m0.197s 00:09:12.983 23:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.983 23:22:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.983 ************************************ 00:09:12.983 END TEST accel_dif_functional_tests 00:09:12.983 ************************************ 00:09:12.983 00:09:12.983 real 1m2.154s 00:09:12.983 user 1m9.063s 00:09:12.983 sys 0m8.414s 00:09:12.983 23:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.983 23:22:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.983 ************************************ 00:09:12.983 END TEST accel 00:09:12.983 ************************************ 00:09:12.983 23:22:33 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:12.983 23:22:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:12.983 23:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.983 23:22:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.983 ************************************ 00:09:12.983 START TEST accel_rpc 00:09:12.983 ************************************ 00:09:12.983 23:22:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:13.241 * Looking for test storage... 00:09:13.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:09:13.241 23:22:33 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:13.241 23:22:33 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=148863 00:09:13.241 23:22:33 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:13.241 23:22:33 -- accel/accel_rpc.sh@15 -- # waitforlisten 148863 00:09:13.241 23:22:33 -- common/autotest_common.sh@819 -- # '[' -z 148863 ']' 00:09:13.241 23:22:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.241 23:22:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:13.241 23:22:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.241 23:22:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:13.241 23:22:33 -- common/autotest_common.sh@10 -- # set +x 00:09:13.241 [2024-07-11 23:22:34.072911] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
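The accel_rpc suite launched here starts spdk_tgt with --wait-for-rpc so opcode-to-module assignments can be made before framework initialization. A manual sketch mirroring the rpc_cmd calls traced below, $SPDK_DIR as before:

  # Assign the 'copy' opcode to the software module, init the framework, confirm the mapping.
  "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
  tgt_pid=$!
  sleep 1   # crude; the harness waits on the RPC socket (waitforlisten) instead
  "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software
  "$SPDK_DIR/scripts/rpc.py" framework_start_init
  "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expect: software
  kill "$tgt_pid"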
00:09:13.241 [2024-07-11 23:22:34.073091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148863 ] 00:09:13.241 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.241 [2024-07-11 23:22:34.169991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.498 [2024-07-11 23:22:34.262341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:13.498 [2024-07-11 23:22:34.262517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.498 23:22:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.498 23:22:34 -- common/autotest_common.sh@852 -- # return 0 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:13.498 23:22:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:13.498 23:22:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.498 23:22:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.498 ************************************ 00:09:13.498 START TEST accel_assign_opcode 00:09:13.498 ************************************ 00:09:13.498 23:22:34 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:13.498 23:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.498 23:22:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.498 [2024-07-11 23:22:34.343149] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:13.498 23:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:13.498 23:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.498 23:22:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.498 [2024-07-11 23:22:34.351164] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:13.498 23:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.498 23:22:34 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:13.498 23:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.498 23:22:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.756 23:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.756 23:22:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:13.756 23:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.756 23:22:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.756 23:22:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:13.756 23:22:34 -- accel/accel_rpc.sh@42 -- # grep software 00:09:13.756 23:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.756 software 00:09:13.756 00:09:13.756 real 0m0.341s 00:09:13.756 user 0m0.076s 00:09:13.756 sys 0m0.009s 00:09:13.756 23:22:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.756 23:22:34 -- common/autotest_common.sh@10 -- # set +x 
00:09:13.756 ************************************ 00:09:13.756 END TEST accel_assign_opcode 00:09:13.756 ************************************ 00:09:13.756 23:22:34 -- accel/accel_rpc.sh@55 -- # killprocess 148863 00:09:13.756 23:22:34 -- common/autotest_common.sh@926 -- # '[' -z 148863 ']' 00:09:13.756 23:22:34 -- common/autotest_common.sh@930 -- # kill -0 148863 00:09:13.756 23:22:34 -- common/autotest_common.sh@931 -- # uname 00:09:13.756 23:22:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:13.756 23:22:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148863 00:09:14.015 23:22:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:14.015 23:22:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:14.015 23:22:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148863' 00:09:14.015 killing process with pid 148863 00:09:14.015 23:22:34 -- common/autotest_common.sh@945 -- # kill 148863 00:09:14.015 23:22:34 -- common/autotest_common.sh@950 -- # wait 148863 00:09:14.273 00:09:14.273 real 0m1.229s 00:09:14.273 user 0m1.227s 00:09:14.273 sys 0m0.469s 00:09:14.273 23:22:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.273 23:22:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.273 ************************************ 00:09:14.273 END TEST accel_rpc 00:09:14.273 ************************************ 00:09:14.273 23:22:35 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:14.273 23:22:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.273 23:22:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.273 23:22:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.273 ************************************ 00:09:14.273 START TEST app_cmdline 00:09:14.273 ************************************ 00:09:14.273 23:22:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:14.531 * Looking for test storage... 00:09:14.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:14.531 23:22:35 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:14.531 23:22:35 -- app/cmdline.sh@17 -- # spdk_tgt_pid=149141 00:09:14.531 23:22:35 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:14.531 23:22:35 -- app/cmdline.sh@18 -- # waitforlisten 149141 00:09:14.531 23:22:35 -- common/autotest_common.sh@819 -- # '[' -z 149141 ']' 00:09:14.531 23:22:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.531 23:22:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.531 23:22:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.531 23:22:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.531 23:22:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.531 [2024-07-11 23:22:35.338073] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:14.531 [2024-07-11 23:22:35.338251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149141 ] 00:09:14.531 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.531 [2024-07-11 23:22:35.434858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.789 [2024-07-11 23:22:35.529232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.789 [2024-07-11 23:22:35.529418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.742 23:22:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:15.743 23:22:36 -- common/autotest_common.sh@852 -- # return 0 00:09:15.743 23:22:36 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:16.001 { 00:09:16.001 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:09:16.001 "fields": { 00:09:16.001 "major": 24, 00:09:16.001 "minor": 1, 00:09:16.001 "patch": 1, 00:09:16.001 "suffix": "-pre", 00:09:16.001 "commit": "4b94202c6" 00:09:16.001 } 00:09:16.001 } 00:09:16.001 23:22:36 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:16.001 23:22:36 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:16.001 23:22:36 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:16.001 23:22:36 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:16.001 23:22:36 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:16.001 23:22:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:16.001 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:09:16.001 23:22:36 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:16.001 23:22:36 -- app/cmdline.sh@26 -- # sort 00:09:16.001 23:22:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:16.001 23:22:36 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:16.001 23:22:36 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:16.001 23:22:36 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:16.001 23:22:36 -- common/autotest_common.sh@640 -- # local es=0 00:09:16.001 23:22:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:16.001 23:22:36 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.001 23:22:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:16.001 23:22:36 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.001 23:22:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:16.001 23:22:36 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.001 23:22:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:16.001 23:22:36 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.001 23:22:36 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:16.001 23:22:36 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:16.259 request: 00:09:16.259 { 00:09:16.259 "method": "env_dpdk_get_mem_stats", 00:09:16.259 "req_id": 1 00:09:16.259 } 00:09:16.259 Got JSON-RPC error response 00:09:16.259 response: 00:09:16.259 { 00:09:16.259 "code": -32601, 00:09:16.259 "message": "Method not found" 00:09:16.259 } 00:09:16.259 23:22:37 -- common/autotest_common.sh@643 -- # es=1 00:09:16.259 23:22:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:16.259 23:22:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:16.259 23:22:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:16.259 23:22:37 -- app/cmdline.sh@1 -- # killprocess 149141 00:09:16.259 23:22:37 -- common/autotest_common.sh@926 -- # '[' -z 149141 ']' 00:09:16.259 23:22:37 -- common/autotest_common.sh@930 -- # kill -0 149141 00:09:16.259 23:22:37 -- common/autotest_common.sh@931 -- # uname 00:09:16.259 23:22:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:16.259 23:22:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149141 00:09:16.516 23:22:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:16.516 23:22:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:16.516 23:22:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149141' 00:09:16.516 killing process with pid 149141 00:09:16.516 23:22:37 -- common/autotest_common.sh@945 -- # kill 149141 00:09:16.516 23:22:37 -- common/autotest_common.sh@950 -- # wait 149141 00:09:16.774 00:09:16.774 real 0m2.467s 00:09:16.774 user 0m3.327s 00:09:16.774 sys 0m0.573s 00:09:16.774 23:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.774 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:16.774 ************************************ 00:09:16.774 END TEST app_cmdline 00:09:16.774 ************************************ 00:09:16.774 23:22:37 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:16.774 23:22:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:16.774 23:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:16.774 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:16.774 ************************************ 00:09:16.774 START TEST version 00:09:16.774 ************************************ 00:09:16.774 23:22:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:17.033 * Looking for test storage... 
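The app_cmdline run above is an allowlist check: spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer while anything outside the list (here env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601, Method not found. The same probes by hand, under the same working-directory and default-socket assumptions as the previous sketch:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  scripts/rpc.py spdk_get_version                      # version JSON, as logged above
  scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods

  # any method outside the allowlist fails with -32601 "Method not found"
  scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected, as expected'
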
00:09:17.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:17.033 23:22:37 -- app/version.sh@17 -- # get_header_version major 00:09:17.033 23:22:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:17.033 23:22:37 -- app/version.sh@14 -- # cut -f2 00:09:17.033 23:22:37 -- app/version.sh@14 -- # tr -d '"' 00:09:17.033 23:22:37 -- app/version.sh@17 -- # major=24 00:09:17.033 23:22:37 -- app/version.sh@18 -- # get_header_version minor 00:09:17.033 23:22:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:17.033 23:22:37 -- app/version.sh@14 -- # cut -f2 00:09:17.033 23:22:37 -- app/version.sh@14 -- # tr -d '"' 00:09:17.033 23:22:37 -- app/version.sh@18 -- # minor=1 00:09:17.033 23:22:37 -- app/version.sh@19 -- # get_header_version patch 00:09:17.033 23:22:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:17.033 23:22:37 -- app/version.sh@14 -- # cut -f2 00:09:17.033 23:22:37 -- app/version.sh@14 -- # tr -d '"' 00:09:17.033 23:22:37 -- app/version.sh@19 -- # patch=1 00:09:17.033 23:22:37 -- app/version.sh@20 -- # get_header_version suffix 00:09:17.033 23:22:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:17.033 23:22:37 -- app/version.sh@14 -- # cut -f2 00:09:17.033 23:22:37 -- app/version.sh@14 -- # tr -d '"' 00:09:17.033 23:22:37 -- app/version.sh@20 -- # suffix=-pre 00:09:17.033 23:22:37 -- app/version.sh@22 -- # version=24.1 00:09:17.033 23:22:37 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:17.033 23:22:37 -- app/version.sh@25 -- # version=24.1.1 00:09:17.033 23:22:37 -- app/version.sh@28 -- # version=24.1.1rc0 00:09:17.033 23:22:37 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:17.033 23:22:37 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:17.033 23:22:37 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:09:17.033 23:22:37 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:09:17.033 00:09:17.033 real 0m0.126s 00:09:17.033 user 0m0.077s 00:09:17.033 sys 0m0.077s 00:09:17.033 23:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.033 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:17.033 ************************************ 00:09:17.033 END TEST version 00:09:17.033 ************************************ 00:09:17.033 23:22:37 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@204 -- # uname -s 00:09:17.033 23:22:37 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:09:17.033 23:22:37 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:17.033 23:22:37 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:17.033 23:22:37 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@268 -- # timing_exit lib 00:09:17.033 23:22:37 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:09:17.033 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:17.033 23:22:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:09:17.033 23:22:37 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:09:17.033 23:22:37 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:17.033 23:22:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:17.033 23:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.033 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:17.033 ************************************ 00:09:17.033 START TEST nvmf_tcp 00:09:17.033 ************************************ 00:09:17.033 23:22:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:17.033 * Looking for test storage... 00:09:17.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:17.033 23:22:37 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:17.033 23:22:37 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:17.033 23:22:37 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.033 23:22:37 -- nvmf/common.sh@7 -- # uname -s 00:09:17.033 23:22:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.033 23:22:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.033 23:22:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.033 23:22:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.033 23:22:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.033 23:22:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.033 23:22:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.033 23:22:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.033 23:22:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.033 23:22:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.033 23:22:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:17.033 23:22:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:17.033 23:22:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.033 23:22:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.033 23:22:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.033 23:22:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.033 23:22:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.033 23:22:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.033 23:22:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.033 23:22:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.033 23:22:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.033 23:22:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.033 23:22:37 -- paths/export.sh@5 -- # export PATH 00:09:17.033 23:22:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.033 23:22:37 -- nvmf/common.sh@46 -- # : 0 00:09:17.033 23:22:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:17.033 23:22:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:17.033 23:22:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:17.033 23:22:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.292 23:22:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.292 23:22:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:17.292 23:22:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:17.292 23:22:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:17.292 23:22:37 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:17.292 23:22:37 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:17.292 23:22:37 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:17.292 23:22:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:17.292 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:17.292 23:22:37 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:09:17.292 23:22:37 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:17.292 23:22:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:17.292 23:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.292 23:22:37 -- common/autotest_common.sh@10 -- # set +x 00:09:17.292 ************************************ 00:09:17.292 START TEST nvmf_example 00:09:17.292 ************************************ 00:09:17.292 23:22:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:17.292 * Looking for test storage... 
00:09:17.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.292 23:22:38 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.292 23:22:38 -- nvmf/common.sh@7 -- # uname -s 00:09:17.292 23:22:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.292 23:22:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.292 23:22:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.292 23:22:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.292 23:22:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.292 23:22:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.292 23:22:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.292 23:22:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.292 23:22:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.293 23:22:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.293 23:22:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:17.293 23:22:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:17.293 23:22:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.293 23:22:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.293 23:22:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.293 23:22:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.293 23:22:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.293 23:22:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.293 23:22:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.293 23:22:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.293 23:22:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.293 23:22:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.293 23:22:38 -- paths/export.sh@5 -- # export PATH 00:09:17.293 23:22:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.293 23:22:38 -- nvmf/common.sh@46 -- # : 0 00:09:17.293 23:22:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:17.293 23:22:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:17.293 23:22:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:17.293 23:22:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.293 23:22:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.293 23:22:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:17.293 23:22:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:17.293 23:22:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:17.293 23:22:38 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:17.293 23:22:38 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:17.293 23:22:38 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:17.293 23:22:38 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:17.293 23:22:38 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:17.293 23:22:38 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:17.293 23:22:38 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:17.293 23:22:38 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:17.293 23:22:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:17.293 23:22:38 -- common/autotest_common.sh@10 -- # set +x 00:09:17.293 23:22:38 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:17.293 23:22:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:17.293 23:22:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.293 23:22:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:17.293 23:22:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:17.293 23:22:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:17.293 23:22:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.293 23:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.293 23:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.293 23:22:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:17.293 23:22:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:17.293 23:22:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:17.293 23:22:38 -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.827 23:22:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:19.827 23:22:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:19.827 23:22:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:19.827 23:22:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:19.827 23:22:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:19.827 23:22:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:19.827 23:22:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:19.827 23:22:40 -- nvmf/common.sh@294 -- # net_devs=() 00:09:19.827 23:22:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:19.827 23:22:40 -- nvmf/common.sh@295 -- # e810=() 00:09:19.827 23:22:40 -- nvmf/common.sh@295 -- # local -ga e810 00:09:19.827 23:22:40 -- nvmf/common.sh@296 -- # x722=() 00:09:19.827 23:22:40 -- nvmf/common.sh@296 -- # local -ga x722 00:09:19.827 23:22:40 -- nvmf/common.sh@297 -- # mlx=() 00:09:19.827 23:22:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:19.827 23:22:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.827 23:22:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:19.827 23:22:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:19.827 23:22:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:19.827 23:22:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:19.827 23:22:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:19.827 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:19.827 23:22:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:19.827 23:22:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:19.827 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:19.827 23:22:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:09:19.827 23:22:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:19.827 23:22:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:19.827 23:22:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:19.827 23:22:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.827 23:22:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:19.827 23:22:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.827 23:22:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:19.827 Found net devices under 0000:84:00.0: cvl_0_0 00:09:19.827 23:22:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.827 23:22:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:19.827 23:22:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.828 23:22:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:19.828 23:22:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.828 23:22:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:19.828 Found net devices under 0000:84:00.1: cvl_0_1 00:09:19.828 23:22:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.828 23:22:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:19.828 23:22:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:19.828 23:22:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:19.828 23:22:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:19.828 23:22:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:19.828 23:22:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.828 23:22:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.828 23:22:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.828 23:22:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:19.828 23:22:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.828 23:22:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.828 23:22:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:19.828 23:22:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.828 23:22:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.828 23:22:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:19.828 23:22:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:19.828 23:22:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.828 23:22:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.828 23:22:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.828 23:22:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.087 23:22:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:20.087 23:22:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.087 23:22:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.087 23:22:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.087 23:22:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:20.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:20.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:09:20.087 00:09:20.087 --- 10.0.0.2 ping statistics --- 00:09:20.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.087 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:09:20.087 23:22:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:09:20.087 00:09:20.087 --- 10.0.0.1 ping statistics --- 00:09:20.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.087 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:20.087 23:22:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.087 23:22:40 -- nvmf/common.sh@410 -- # return 0 00:09:20.087 23:22:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:20.087 23:22:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.087 23:22:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:20.087 23:22:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:20.087 23:22:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.087 23:22:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:20.087 23:22:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:20.087 23:22:40 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:20.087 23:22:40 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:20.087 23:22:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:20.087 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:09:20.087 23:22:40 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:20.087 23:22:40 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:20.087 23:22:40 -- target/nvmf_example.sh@34 -- # nvmfpid=151251 00:09:20.087 23:22:40 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:20.087 23:22:40 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.087 23:22:40 -- target/nvmf_example.sh@36 -- # waitforlisten 151251 00:09:20.087 23:22:40 -- common/autotest_common.sh@819 -- # '[' -z 151251 ']' 00:09:20.087 23:22:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.087 23:22:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:20.087 23:22:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
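The nvmftestinit trace above builds the standard two-port loopback for the phy TCP runs: the first E810 port (cvl_0_0) moves into a private network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP listener port, and both directions are ping-verified. Condensed from the commands in the trace (interface names are the ones discovered above; only the address flushes are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator
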
00:09:20.087 23:22:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:20.087 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:09:20.087 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.653 23:22:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:20.654 23:22:41 -- common/autotest_common.sh@852 -- # return 0 00:09:20.654 23:22:41 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:20.654 23:22:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:20.654 23:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:20.654 23:22:41 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.654 23:22:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.654 23:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:20.654 23:22:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.654 23:22:41 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:20.654 23:22:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.654 23:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:20.654 23:22:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.654 23:22:41 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:20.654 23:22:41 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.654 23:22:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.654 23:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:20.654 23:22:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.654 23:22:41 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:20.654 23:22:41 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.654 23:22:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.654 23:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:20.654 23:22:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.654 23:22:41 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.654 23:22:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.654 23:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:20.654 23:22:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.654 23:22:41 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:20.654 23:22:41 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:20.654 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.862 Initializing NVMe Controllers 00:09:32.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:32.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:32.862 Initialization complete. Launching workers. 
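Four RPCs assemble the example target that was just exercised: a TCP transport with an 8 KiB I/O unit, a 64 MiB malloc bdev with 512-byte blocks, a subsystem to own it, and a listener on the namespaced address; spdk_nvme_perf then drives a queue-depth-64, 4 KiB random mixed read/write workload (-M 30) at it for 10 seconds from the initiator side. Condensed from the trace, with the same working-directory assumption as the earlier sketches (commands and arguments are verbatim):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512        # creates Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 10 s of 4 KiB random mixed I/O at queue depth 64 over the loopback pair
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The summary that follows puts this pairing at roughly 14.9k IOPS with a 4.3 ms average latency.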
00:09:32.862 ======================================================== 00:09:32.862 Latency(us) 00:09:32.862 Device Information : IOPS MiB/s Average min max 00:09:32.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14893.01 58.18 4296.88 854.35 19179.83 00:09:32.862 ======================================================== 00:09:32.862 Total : 14893.01 58.18 4296.88 854.35 19179.83 00:09:32.862 00:09:32.862 23:22:51 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:32.862 23:22:51 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:32.862 23:22:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:32.862 23:22:51 -- nvmf/common.sh@116 -- # sync 00:09:32.862 23:22:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:32.862 23:22:51 -- nvmf/common.sh@119 -- # set +e 00:09:32.862 23:22:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:32.862 23:22:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:32.862 rmmod nvme_tcp 00:09:32.862 rmmod nvme_fabrics 00:09:32.862 rmmod nvme_keyring 00:09:32.862 23:22:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:32.862 23:22:51 -- nvmf/common.sh@123 -- # set -e 00:09:32.862 23:22:51 -- nvmf/common.sh@124 -- # return 0 00:09:32.862 23:22:51 -- nvmf/common.sh@477 -- # '[' -n 151251 ']' 00:09:32.862 23:22:51 -- nvmf/common.sh@478 -- # killprocess 151251 00:09:32.862 23:22:51 -- common/autotest_common.sh@926 -- # '[' -z 151251 ']' 00:09:32.862 23:22:51 -- common/autotest_common.sh@930 -- # kill -0 151251 00:09:32.862 23:22:51 -- common/autotest_common.sh@931 -- # uname 00:09:32.862 23:22:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:32.863 23:22:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 151251 00:09:32.863 23:22:51 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:09:32.863 23:22:51 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:09:32.863 23:22:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 151251' 00:09:32.863 killing process with pid 151251 00:09:32.863 23:22:51 -- common/autotest_common.sh@945 -- # kill 151251 00:09:32.863 23:22:51 -- common/autotest_common.sh@950 -- # wait 151251 00:09:32.863 nvmf threads initialize successfully 00:09:32.863 bdev subsystem init successfully 00:09:32.863 created a nvmf target service 00:09:32.863 create targets's poll groups done 00:09:32.863 all subsystems of target started 00:09:32.863 nvmf target is running 00:09:32.863 all subsystems of target stopped 00:09:32.863 destroy targets's poll groups done 00:09:32.863 destroyed the nvmf target service 00:09:32.863 bdev subsystem finish successfully 00:09:32.863 nvmf threads destroy successfully 00:09:32.863 23:22:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:32.863 23:22:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:32.863 23:22:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:32.863 23:22:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.863 23:22:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:32.863 23:22:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.863 23:22:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.863 23:22:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.465 23:22:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:33.465 23:22:54 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:33.465 23:22:54 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:09:33.465 23:22:54 -- common/autotest_common.sh@10 -- # set +x 00:09:33.465 00:09:33.465 real 0m16.129s 00:09:33.465 user 0m42.904s 00:09:33.465 sys 0m4.237s 00:09:33.465 23:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.465 23:22:54 -- common/autotest_common.sh@10 -- # set +x 00:09:33.465 ************************************ 00:09:33.465 END TEST nvmf_example 00:09:33.465 ************************************ 00:09:33.465 23:22:54 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:33.465 23:22:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:33.465 23:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.465 23:22:54 -- common/autotest_common.sh@10 -- # set +x 00:09:33.465 ************************************ 00:09:33.465 START TEST nvmf_filesystem 00:09:33.465 ************************************ 00:09:33.465 23:22:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:33.465 * Looking for test storage... 00:09:33.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.465 23:22:54 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:33.465 23:22:54 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:33.465 23:22:54 -- common/autotest_common.sh@34 -- # set -e 00:09:33.465 23:22:54 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:33.465 23:22:54 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:33.465 23:22:54 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:33.465 23:22:54 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:33.465 23:22:54 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:33.465 23:22:54 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:33.465 23:22:54 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:33.465 23:22:54 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:33.465 23:22:54 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:33.465 23:22:54 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:33.465 23:22:54 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:33.465 23:22:54 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:33.465 23:22:54 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:33.465 23:22:54 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:33.465 23:22:54 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:33.465 23:22:54 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:33.465 23:22:54 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:33.465 23:22:54 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:33.465 23:22:54 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:33.465 23:22:54 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:33.465 23:22:54 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:33.465 23:22:54 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:33.465 23:22:54 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:33.465 23:22:54 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:09:33.465 23:22:54 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:33.465 23:22:54 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:33.465 23:22:54 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:33.465 23:22:54 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:33.465 23:22:54 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:33.465 23:22:54 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:33.465 23:22:54 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:33.465 23:22:54 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:33.465 23:22:54 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:33.465 23:22:54 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:33.465 23:22:54 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:33.465 23:22:54 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:33.465 23:22:54 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:33.465 23:22:54 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:33.465 23:22:54 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:33.465 23:22:54 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:09:33.465 23:22:54 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:33.465 23:22:54 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:33.465 23:22:54 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:33.465 23:22:54 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:33.465 23:22:54 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:33.465 23:22:54 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:33.465 23:22:54 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:33.465 23:22:54 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:33.465 23:22:54 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:33.465 23:22:54 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:09:33.465 23:22:54 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:09:33.465 23:22:54 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:33.465 23:22:54 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:09:33.465 23:22:54 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:09:33.465 23:22:54 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:09:33.465 23:22:54 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:09:33.465 23:22:54 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:09:33.465 23:22:54 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:09:33.465 23:22:54 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:09:33.465 23:22:54 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:09:33.465 23:22:54 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:09:33.465 23:22:54 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:09:33.465 23:22:54 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:09:33.465 23:22:54 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:09:33.465 23:22:54 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:33.465 23:22:54 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:09:33.465 23:22:54 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:09:33.465 23:22:54 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:09:33.465 23:22:54 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 
00:09:33.465 23:22:54 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:33.465 23:22:54 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:09:33.465 23:22:54 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:09:33.465 23:22:54 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:09:33.465 23:22:54 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:09:33.465 23:22:54 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:09:33.465 23:22:54 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:09:33.465 23:22:54 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:09:33.465 23:22:54 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:09:33.465 23:22:54 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:09:33.465 23:22:54 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:09:33.465 23:22:54 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:33.465 23:22:54 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:09:33.465 23:22:54 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:09:33.465 23:22:54 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:33.465 23:22:54 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:33.465 23:22:54 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:33.465 23:22:54 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:33.465 23:22:54 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:33.465 23:22:54 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:33.465 23:22:54 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:33.465 23:22:54 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:33.465 23:22:54 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:33.465 23:22:54 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:33.465 23:22:54 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:33.465 23:22:54 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:33.465 23:22:54 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:33.465 23:22:54 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:33.465 23:22:54 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:33.465 23:22:54 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:33.465 #define SPDK_CONFIG_H 00:09:33.465 #define SPDK_CONFIG_APPS 1 00:09:33.465 #define SPDK_CONFIG_ARCH native 00:09:33.465 #undef SPDK_CONFIG_ASAN 00:09:33.465 #undef SPDK_CONFIG_AVAHI 00:09:33.465 #undef SPDK_CONFIG_CET 00:09:33.465 #define SPDK_CONFIG_COVERAGE 1 00:09:33.465 #define SPDK_CONFIG_CROSS_PREFIX 00:09:33.465 #undef SPDK_CONFIG_CRYPTO 00:09:33.465 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:33.465 #undef SPDK_CONFIG_CUSTOMOCF 00:09:33.465 #undef SPDK_CONFIG_DAOS 00:09:33.465 #define SPDK_CONFIG_DAOS_DIR 00:09:33.465 #define SPDK_CONFIG_DEBUG 1 00:09:33.465 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:33.465 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:09:33.465 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:33.465 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:33.465 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:33.466 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:33.466 #define SPDK_CONFIG_EXAMPLES 1 00:09:33.466 #undef SPDK_CONFIG_FC 00:09:33.466 #define SPDK_CONFIG_FC_PATH 00:09:33.466 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:33.466 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:33.466 #undef SPDK_CONFIG_FUSE 00:09:33.466 #undef SPDK_CONFIG_FUZZER 00:09:33.466 #define SPDK_CONFIG_FUZZER_LIB 00:09:33.466 #undef SPDK_CONFIG_GOLANG 00:09:33.466 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:33.466 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:33.466 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:33.466 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:33.466 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:33.466 #define SPDK_CONFIG_IDXD 1 00:09:33.466 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:33.466 #undef SPDK_CONFIG_IPSEC_MB 00:09:33.466 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:33.466 #define SPDK_CONFIG_ISAL 1 00:09:33.466 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:33.466 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:33.466 #define SPDK_CONFIG_LIBDIR 00:09:33.466 #undef SPDK_CONFIG_LTO 00:09:33.466 #define SPDK_CONFIG_MAX_LCORES 00:09:33.466 #define SPDK_CONFIG_NVME_CUSE 1 00:09:33.466 #undef SPDK_CONFIG_OCF 00:09:33.466 #define SPDK_CONFIG_OCF_PATH 00:09:33.466 #define SPDK_CONFIG_OPENSSL_PATH 00:09:33.466 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:33.466 #undef SPDK_CONFIG_PGO_USE 00:09:33.466 #define SPDK_CONFIG_PREFIX /usr/local 00:09:33.466 #undef SPDK_CONFIG_RAID5F 00:09:33.466 #undef SPDK_CONFIG_RBD 00:09:33.466 #define SPDK_CONFIG_RDMA 1 00:09:33.466 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:33.466 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:33.466 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:33.466 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:33.466 #define SPDK_CONFIG_SHARED 1 00:09:33.466 #undef SPDK_CONFIG_SMA 00:09:33.466 #define SPDK_CONFIG_TESTS 1 00:09:33.466 #undef SPDK_CONFIG_TSAN 00:09:33.466 #define SPDK_CONFIG_UBLK 1 00:09:33.466 #define SPDK_CONFIG_UBSAN 1 00:09:33.466 #undef SPDK_CONFIG_UNIT_TESTS 00:09:33.466 #undef SPDK_CONFIG_URING 00:09:33.466 #define SPDK_CONFIG_URING_PATH 00:09:33.466 #undef SPDK_CONFIG_URING_ZNS 00:09:33.466 #undef SPDK_CONFIG_USDT 00:09:33.466 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:33.466 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:33.466 #define SPDK_CONFIG_VFIO_USER 1 00:09:33.466 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:33.466 #define SPDK_CONFIG_VHOST 1 00:09:33.466 #define SPDK_CONFIG_VIRTIO 1 00:09:33.466 #undef SPDK_CONFIG_VTUNE 00:09:33.466 #define SPDK_CONFIG_VTUNE_DIR 00:09:33.466 #define SPDK_CONFIG_WERROR 1 00:09:33.466 #define SPDK_CONFIG_WPDK_DIR 00:09:33.466 #undef SPDK_CONFIG_XNVME 00:09:33.466 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:33.466 23:22:54 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:33.466 23:22:54 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.466 23:22:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.466 23:22:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.466 
23:22:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.466 23:22:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.466 23:22:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.466 23:22:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.466 23:22:54 -- paths/export.sh@5 -- # export PATH 00:09:33.466 23:22:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.466 23:22:54 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:33.466 23:22:54 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:33.466 23:22:54 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:33.466 23:22:54 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:33.466 23:22:54 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:33.466 23:22:54 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:33.466 23:22:54 -- pm/common@16 -- # TEST_TAG=N/A 00:09:33.466 23:22:54 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:33.466 23:22:54 -- common/autotest_common.sh@52 -- # : 1 00:09:33.466 23:22:54 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:09:33.466 23:22:54 -- common/autotest_common.sh@56 -- # : 0 
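Note: the "# : 0" entry just above and the "# export SPDK_AUTOTEST_DEBUG_APPS" that follows are one instance of the default-assignment idiom autotest_common.sh applies to every RUN_/SPDK_TEST_* switch in the long run of pairs below; for this job NVMF, VFIOUSER, NVME_CLI and nightly runs are enabled while the rest stay 0. The pattern behind each pair is (a sketch of the idiom, not a verbatim excerpt):

    : "${RUN_NIGHTLY:=1}"     # ':' is a no-op; the expansion assigns only when unset,
    export RUN_NIGHTLY        # so values injected by the CI job survive
    : "${SPDK_TEST_NVMF:=1}"
    export SPDK_TEST_NVMF

That is why the trace prints the bare expanded value (": 1" or ": 0") on one source line and the matching export on the next.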
00:09:33.466 23:22:54 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:33.466 23:22:54 -- common/autotest_common.sh@58 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:09:33.466 23:22:54 -- common/autotest_common.sh@60 -- # : 1 00:09:33.466 23:22:54 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:33.466 23:22:54 -- common/autotest_common.sh@62 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:09:33.466 23:22:54 -- common/autotest_common.sh@64 -- # : 00:09:33.466 23:22:54 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:09:33.466 23:22:54 -- common/autotest_common.sh@66 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:09:33.466 23:22:54 -- common/autotest_common.sh@68 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:09:33.466 23:22:54 -- common/autotest_common.sh@70 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:09:33.466 23:22:54 -- common/autotest_common.sh@72 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:33.466 23:22:54 -- common/autotest_common.sh@74 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:09:33.466 23:22:54 -- common/autotest_common.sh@76 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:09:33.466 23:22:54 -- common/autotest_common.sh@78 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:09:33.466 23:22:54 -- common/autotest_common.sh@80 -- # : 1 00:09:33.466 23:22:54 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:09:33.466 23:22:54 -- common/autotest_common.sh@82 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:09:33.466 23:22:54 -- common/autotest_common.sh@84 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:09:33.466 23:22:54 -- common/autotest_common.sh@86 -- # : 1 00:09:33.466 23:22:54 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:09:33.466 23:22:54 -- common/autotest_common.sh@88 -- # : 1 00:09:33.466 23:22:54 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:09:33.466 23:22:54 -- common/autotest_common.sh@90 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:33.466 23:22:54 -- common/autotest_common.sh@92 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:09:33.466 23:22:54 -- common/autotest_common.sh@94 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:09:33.466 23:22:54 -- common/autotest_common.sh@96 -- # : tcp 00:09:33.466 23:22:54 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:33.466 23:22:54 -- common/autotest_common.sh@98 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:09:33.466 23:22:54 -- common/autotest_common.sh@100 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:09:33.466 23:22:54 -- common/autotest_common.sh@102 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:09:33.466 23:22:54 -- 
common/autotest_common.sh@104 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:09:33.466 23:22:54 -- common/autotest_common.sh@106 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:09:33.466 23:22:54 -- common/autotest_common.sh@108 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:09:33.466 23:22:54 -- common/autotest_common.sh@110 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:09:33.466 23:22:54 -- common/autotest_common.sh@112 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:33.466 23:22:54 -- common/autotest_common.sh@114 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:09:33.466 23:22:54 -- common/autotest_common.sh@116 -- # : 1 00:09:33.466 23:22:54 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:09:33.466 23:22:54 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:09:33.466 23:22:54 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:33.466 23:22:54 -- common/autotest_common.sh@120 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:09:33.466 23:22:54 -- common/autotest_common.sh@122 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:09:33.466 23:22:54 -- common/autotest_common.sh@124 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:09:33.466 23:22:54 -- common/autotest_common.sh@126 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:09:33.466 23:22:54 -- common/autotest_common.sh@128 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:09:33.466 23:22:54 -- common/autotest_common.sh@130 -- # : 0 00:09:33.466 23:22:54 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:09:33.466 23:22:54 -- common/autotest_common.sh@132 -- # : v23.11 00:09:33.466 23:22:54 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:09:33.466 23:22:54 -- common/autotest_common.sh@134 -- # : true 00:09:33.467 23:22:54 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:09:33.467 23:22:54 -- common/autotest_common.sh@136 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:09:33.467 23:22:54 -- common/autotest_common.sh@138 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:09:33.467 23:22:54 -- common/autotest_common.sh@140 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:09:33.467 23:22:54 -- common/autotest_common.sh@142 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:09:33.467 23:22:54 -- common/autotest_common.sh@144 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:09:33.467 23:22:54 -- common/autotest_common.sh@146 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:09:33.467 23:22:54 -- common/autotest_common.sh@148 -- # : e810 00:09:33.467 23:22:54 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:09:33.467 23:22:54 -- common/autotest_common.sh@150 -- # : 0 00:09:33.467 23:22:54 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:09:33.467 23:22:54 -- common/autotest_common.sh@152 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:09:33.467 23:22:54 -- common/autotest_common.sh@154 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:09:33.467 23:22:54 -- common/autotest_common.sh@156 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:09:33.467 23:22:54 -- common/autotest_common.sh@158 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:09:33.467 23:22:54 -- common/autotest_common.sh@160 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:09:33.467 23:22:54 -- common/autotest_common.sh@163 -- # : 00:09:33.467 23:22:54 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:09:33.467 23:22:54 -- common/autotest_common.sh@165 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:09:33.467 23:22:54 -- common/autotest_common.sh@167 -- # : 0 00:09:33.467 23:22:54 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:33.467 23:22:54 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:33.467 23:22:54 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:33.467 23:22:54 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:33.467 23:22:54 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:33.467 23:22:54 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:33.467 23:22:54 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:33.467 23:22:54 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:09:33.467 23:22:54 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:33.467 23:22:54 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:33.467 23:22:54 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:33.467 23:22:54 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:33.467 23:22:54 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:33.467 23:22:54 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:09:33.467 23:22:54 -- common/autotest_common.sh@196 -- # cat 00:09:33.467 23:22:54 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:09:33.467 23:22:54 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:33.467 23:22:54 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:33.467 23:22:54 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:33.467 23:22:54 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:33.467 23:22:54 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:09:33.467 23:22:54 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:09:33.467 23:22:54 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:33.467 23:22:54 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:33.467 23:22:54 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:33.467 23:22:54 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:33.467 23:22:54 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:33.467 23:22:54 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:33.467 23:22:54 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:33.467 23:22:54 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:33.467 23:22:54 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:33.467 23:22:54 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:33.467 23:22:54 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:33.467 23:22:54 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:33.467 23:22:54 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:09:33.467 23:22:54 -- common/autotest_common.sh@249 -- # export valgrind= 00:09:33.467 23:22:54 -- common/autotest_common.sh@249 -- # valgrind= 00:09:33.467 23:22:54 -- common/autotest_common.sh@255 -- # uname -s 00:09:33.467 23:22:54 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:09:33.467 23:22:54 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:09:33.467 23:22:54 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:09:33.467 23:22:54 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:09:33.467 23:22:54 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:09:33.467 23:22:54 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:09:33.467 23:22:54 -- common/autotest_common.sh@265 -- # MAKE=make 00:09:33.467 23:22:54 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:09:33.467 23:22:54 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:09:33.467 23:22:54 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:09:33.467 23:22:54 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:33.467 23:22:54 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:09:33.467 23:22:54 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:09:33.467 23:22:54 -- common/autotest_common.sh@291 -- # for i in "$@" 00:09:33.467 23:22:54 -- common/autotest_common.sh@292 -- # case "$i" in 00:09:33.467 23:22:54 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:09:33.467 23:22:54 -- common/autotest_common.sh@309 -- # [[ -z 152997 ]] 00:09:33.467 23:22:54 -- common/autotest_common.sh@309 -- # kill -0 152997 00:09:33.467 23:22:54 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:09:33.467 23:22:54 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:09:33.467 23:22:54 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:09:33.467 23:22:54 -- common/autotest_common.sh@322 -- # local mount target_dir 00:09:33.467 23:22:54 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:09:33.467 23:22:54 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:09:33.467 23:22:54 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:09:33.467 23:22:54 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:09:33.467 23:22:54 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.tvkiAb 00:09:33.467 23:22:54 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:33.467 23:22:54 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:09:33.467 23:22:54 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:09:33.467 23:22:54 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tvkiAb/tests/target /tmp/spdk.tvkiAb 00:09:33.467 23:22:54 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:09:33.467 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.467 23:22:54 -- common/autotest_common.sh@318 -- # df -T 00:09:33.467 23:22:54 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:09:33.467 23:22:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:09:33.467 23:22:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:09:33.468 23:22:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:09:33.468 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=949354496 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:09:33.468 23:22:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=4335075328 00:09:33.468 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=36031238144 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=45083312128 00:09:33.468 23:22:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=9052073984 00:09:33.468 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
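Note: set_test_storage, entered above with a 2147483648-byte request, decides where the nvmf target tests may write. The candidates are the testdir itself, a mktemp fallback (/tmp/spdk.tvkiAb/tests/target), and the fallback root, and the read loop running here and just below walks df output into the mounts/fss/sizes/avails/uses arrays keyed by mount point; the stored values are byte counts (67108864 for the 64 MiB spdk devtmpfs). Condensed, the probe looks like this (a sketch of the pattern the trace shows, not the verbatim function):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source    # e.g. spdk_root
        fss["$mount"]=$fs           # e.g. overlay, tmpfs, ext2
        sizes["$mount"]=$size       # byte counts, per the logged values
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)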
00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=22488137728 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=22541656064 00:09:33.468 23:22:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:09:33.468 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=9007878144 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9016664064 00:09:33.468 23:22:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=8785920 00:09:33.468 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=22540509184 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=22541656064 00:09:33.468 23:22:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=1146880 00:09:33.468 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=4508323840 00:09:33.468 23:22:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4508327936 00:09:33.468 23:22:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:09:33.468 23:22:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:33.468 23:22:54 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:09:33.468 * Looking for test storage... 
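Note: the candidate check that follows resolves the test directory's mount point with awk over df (here /, an overlay with target_space=36031238144 bytes available) and accepts it unless reserving the requested space on top of what is already used would push the filesystem past 95% full. With this run's numbers the arithmetic works out as below (values copied from the trace; the formula is a sketch consistent with them, not a verbatim excerpt):

    requested_size=2214592512            # the 2147483648-byte request plus a 64 MiB cushion
    used=9052073984                      # uses[/] from the df pass
    size=45083312128                     # sizes[/]
    new_size=$((requested_size + used))  # 11266666496, matching the logged value
    (( new_size * 100 / size > 95 )) && echo "too full, try the next candidate"
    # 11266666496 * 100 / 45083312128 is about 24, well under 95, so / is accepted

Hence the "* Found test storage at .../spdk/test/nvmf/target" line below.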
00:09:33.468 23:22:54 -- common/autotest_common.sh@359 -- # local target_space new_size 00:09:33.468 23:22:54 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:09:33.468 23:22:54 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.468 23:22:54 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:33.468 23:22:54 -- common/autotest_common.sh@363 -- # mount=/ 00:09:33.468 23:22:54 -- common/autotest_common.sh@365 -- # target_space=36031238144 00:09:33.468 23:22:54 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:09:33.468 23:22:54 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:09:33.468 23:22:54 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:09:33.468 23:22:54 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:09:33.468 23:22:54 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:09:33.468 23:22:54 -- common/autotest_common.sh@372 -- # new_size=11266666496 00:09:33.468 23:22:54 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:33.468 23:22:54 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.468 23:22:54 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.468 23:22:54 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.468 23:22:54 -- common/autotest_common.sh@380 -- # return 0 00:09:33.468 23:22:54 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:09:33.468 23:22:54 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:09:33.468 23:22:54 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:33.468 23:22:54 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:33.468 23:22:54 -- common/autotest_common.sh@1672 -- # true 00:09:33.468 23:22:54 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:09:33.468 23:22:54 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:33.468 23:22:54 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:33.468 23:22:54 -- common/autotest_common.sh@27 -- # exec 00:09:33.468 23:22:54 -- common/autotest_common.sh@29 -- # exec 00:09:33.468 23:22:54 -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:33.468 23:22:54 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:33.468 23:22:54 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:33.468 23:22:54 -- common/autotest_common.sh@18 -- # set -x 00:09:33.468 23:22:54 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.468 23:22:54 -- nvmf/common.sh@7 -- # uname -s 00:09:33.468 23:22:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.468 23:22:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.468 23:22:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.468 23:22:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.468 23:22:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.468 23:22:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.468 23:22:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.468 23:22:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.468 23:22:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.468 23:22:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.468 23:22:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:33.468 23:22:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:33.468 23:22:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.468 23:22:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.468 23:22:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.468 23:22:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.468 23:22:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.468 23:22:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.468 23:22:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.468 23:22:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 23:22:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 23:22:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 23:22:54 -- paths/export.sh@5 -- # export PATH 00:09:33.468 23:22:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 23:22:54 -- nvmf/common.sh@46 -- # : 0 00:09:33.468 23:22:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:33.468 23:22:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:33.468 23:22:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:33.468 23:22:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.468 23:22:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.468 23:22:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:33.468 23:22:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:33.468 23:22:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:33.468 23:22:54 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:33.468 23:22:54 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:33.468 23:22:54 -- target/filesystem.sh@15 -- # nvmftestinit 00:09:33.468 23:22:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:33.468 23:22:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.468 23:22:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:33.468 23:22:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:33.468 23:22:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:33.468 23:22:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.468 23:22:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.468 23:22:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.468 23:22:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:33.468 23:22:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:33.468 23:22:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:33.468 23:22:54 -- common/autotest_common.sh@10 -- # set +x 00:09:35.999 23:22:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:35.999 23:22:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:35.999 23:22:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:35.999 23:22:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:35.999 23:22:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:35.999 23:22:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:35.999 23:22:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:35.999 23:22:56 -- 
nvmf/common.sh@294 -- # net_devs=() 00:09:35.999 23:22:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:35.999 23:22:56 -- nvmf/common.sh@295 -- # e810=() 00:09:35.999 23:22:56 -- nvmf/common.sh@295 -- # local -ga e810 00:09:35.999 23:22:56 -- nvmf/common.sh@296 -- # x722=() 00:09:35.999 23:22:56 -- nvmf/common.sh@296 -- # local -ga x722 00:09:35.999 23:22:56 -- nvmf/common.sh@297 -- # mlx=() 00:09:35.999 23:22:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:35.999 23:22:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.999 23:22:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:35.999 23:22:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:35.999 23:22:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:35.999 23:22:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:35.999 23:22:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:35.999 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:35.999 23:22:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:35.999 23:22:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:35.999 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:35.999 23:22:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:35.999 23:22:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:35.999 23:22:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:35.999 23:22:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.000 23:22:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:36.000 23:22:56 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.000 23:22:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:36.000 Found net devices under 0000:84:00.0: cvl_0_0 00:09:36.000 23:22:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.000 23:22:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:36.000 23:22:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.000 23:22:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:36.000 23:22:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.000 23:22:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:36.000 Found net devices under 0000:84:00.1: cvl_0_1 00:09:36.000 23:22:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.000 23:22:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:36.000 23:22:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:36.000 23:22:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:36.000 23:22:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:36.000 23:22:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:36.000 23:22:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.000 23:22:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.000 23:22:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.000 23:22:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:36.000 23:22:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.000 23:22:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.000 23:22:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:36.000 23:22:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.000 23:22:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.000 23:22:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:36.000 23:22:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:36.000 23:22:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.000 23:22:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.000 23:22:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.000 23:22:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.000 23:22:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:36.000 23:22:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.000 23:22:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.000 23:22:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.000 23:22:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:36.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:09:36.000 00:09:36.000 --- 10.0.0.2 ping statistics --- 00:09:36.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.000 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:36.000 23:22:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:09:36.000 00:09:36.000 --- 10.0.0.1 ping statistics --- 00:09:36.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.000 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:36.000 23:22:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.000 23:22:56 -- nvmf/common.sh@410 -- # return 0 00:09:36.000 23:22:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:36.000 23:22:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.000 23:22:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:36.000 23:22:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:36.000 23:22:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.000 23:22:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:36.000 23:22:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:36.000 23:22:56 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:36.000 23:22:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:36.000 23:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:36.000 23:22:56 -- common/autotest_common.sh@10 -- # set +x 00:09:36.000 ************************************ 00:09:36.000 START TEST nvmf_filesystem_no_in_capsule 00:09:36.000 ************************************ 00:09:36.000 23:22:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:09:36.000 23:22:56 -- target/filesystem.sh@47 -- # in_capsule=0 00:09:36.000 23:22:56 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:36.000 23:22:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:36.000 23:22:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:36.000 23:22:56 -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 23:22:56 -- nvmf/common.sh@469 -- # nvmfpid=154641 00:09:36.258 23:22:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.258 23:22:56 -- nvmf/common.sh@470 -- # waitforlisten 154641 00:09:36.258 23:22:56 -- common/autotest_common.sh@819 -- # '[' -z 154641 ']' 00:09:36.258 23:22:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.258 23:22:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:36.258 23:22:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.258 23:22:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:36.258 23:22:56 -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 [2024-07-11 23:22:56.999353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:36.258 [2024-07-11 23:22:56.999448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.258 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.258 [2024-07-11 23:22:57.076693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.258 [2024-07-11 23:22:57.175230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.258 [2024-07-11 23:22:57.175403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.258 [2024-07-11 23:22:57.175423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.258 [2024-07-11 23:22:57.175438] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.258 [2024-07-11 23:22:57.175517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.258 [2024-07-11 23:22:57.175573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.258 [2024-07-11 23:22:57.175626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.258 [2024-07-11 23:22:57.175629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.192 23:22:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:37.192 23:22:57 -- common/autotest_common.sh@852 -- # return 0 00:09:37.192 23:22:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:37.192 23:22:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:37.192 23:22:57 -- common/autotest_common.sh@10 -- # set +x 00:09:37.192 23:22:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.192 23:22:57 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:37.192 23:22:57 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:37.192 23:22:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.192 23:22:57 -- common/autotest_common.sh@10 -- # set +x 00:09:37.192 [2024-07-11 23:22:58.002783] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.192 23:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.192 23:22:58 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:37.192 23:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.192 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:09:37.450 Malloc1 00:09:37.450 23:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.450 23:22:58 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.450 23:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.450 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:09:37.450 23:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.450 23:22:58 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.450 23:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.450 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:09:37.450 23:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.450 23:22:58 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
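Note: with nvmf_tgt (pid 154641) running inside the cvl_0_0_ns_spdk namespace and its four reactors up, the test provisions the target over JSON-RPC: a TCP transport with in-capsule data disabled (-c 0, this being the no_in_capsule variant), a 512 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 (the listen notice follows below). Expressed as direct rpc.py calls, the sequence is (a sketch; rpc_cmd in the harness wraps scripts/rpc.py against the app's RPC socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create -b Malloc1 512 512    # 512 MiB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The host side then attaches with nvme connect using the hostnqn generated earlier, which is what makes the namespace appear as nvme0n1 for the parted and mkfs steps further down.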
00:09:37.450 23:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.450 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:09:37.450 [2024-07-11 23:22:58.196713] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.450 23:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.450 23:22:58 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:37.450 23:22:58 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:09:37.450 23:22:58 -- common/autotest_common.sh@1358 -- # local bdev_info 00:09:37.450 23:22:58 -- common/autotest_common.sh@1359 -- # local bs 00:09:37.450 23:22:58 -- common/autotest_common.sh@1360 -- # local nb 00:09:37.450 23:22:58 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:37.450 23:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.450 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:09:37.450 23:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.450 23:22:58 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:09:37.450 { 00:09:37.450 "name": "Malloc1", 00:09:37.450 "aliases": [ 00:09:37.450 "ffc6ad56-4e89-4819-8ead-f4bcc01adcf2" 00:09:37.450 ], 00:09:37.450 "product_name": "Malloc disk", 00:09:37.450 "block_size": 512, 00:09:37.450 "num_blocks": 1048576, 00:09:37.450 "uuid": "ffc6ad56-4e89-4819-8ead-f4bcc01adcf2", 00:09:37.450 "assigned_rate_limits": { 00:09:37.450 "rw_ios_per_sec": 0, 00:09:37.450 "rw_mbytes_per_sec": 0, 00:09:37.450 "r_mbytes_per_sec": 0, 00:09:37.450 "w_mbytes_per_sec": 0 00:09:37.450 }, 00:09:37.450 "claimed": true, 00:09:37.450 "claim_type": "exclusive_write", 00:09:37.450 "zoned": false, 00:09:37.450 "supported_io_types": { 00:09:37.450 "read": true, 00:09:37.450 "write": true, 00:09:37.450 "unmap": true, 00:09:37.450 "write_zeroes": true, 00:09:37.450 "flush": true, 00:09:37.450 "reset": true, 00:09:37.450 "compare": false, 00:09:37.450 "compare_and_write": false, 00:09:37.450 "abort": true, 00:09:37.450 "nvme_admin": false, 00:09:37.450 "nvme_io": false 00:09:37.450 }, 00:09:37.450 "memory_domains": [ 00:09:37.450 { 00:09:37.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.450 "dma_device_type": 2 00:09:37.450 } 00:09:37.450 ], 00:09:37.450 "driver_specific": {} 00:09:37.450 } 00:09:37.450 ]' 00:09:37.450 23:22:58 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:09:37.450 23:22:58 -- common/autotest_common.sh@1362 -- # bs=512 00:09:37.450 23:22:58 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:09:37.450 23:22:58 -- common/autotest_common.sh@1363 -- # nb=1048576 00:09:37.450 23:22:58 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:09:37.450 23:22:58 -- common/autotest_common.sh@1367 -- # echo 512 00:09:37.450 23:22:58 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:37.451 23:22:58 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.016 23:22:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.016 23:22:58 -- common/autotest_common.sh@1177 -- # local i=0 00:09:38.016 23:22:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.016 23:22:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:38.016 23:22:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:40.544 23:23:00 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:40.544 23:23:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:40.544 23:23:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.544 23:23:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:09:40.544 23:23:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.544 23:23:00 -- common/autotest_common.sh@1187 -- # return 0 00:09:40.544 23:23:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:40.544 23:23:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:40.544 23:23:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:40.544 23:23:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:40.544 23:23:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:40.544 23:23:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:40.544 23:23:00 -- setup/common.sh@80 -- # echo 536870912 00:09:40.544 23:23:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:40.544 23:23:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:40.544 23:23:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:40.544 23:23:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:40.544 23:23:01 -- target/filesystem.sh@69 -- # partprobe 00:09:40.801 23:23:01 -- target/filesystem.sh@70 -- # sleep 1 00:09:41.737 23:23:02 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:41.737 23:23:02 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:41.737 23:23:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:41.737 23:23:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.737 23:23:02 -- common/autotest_common.sh@10 -- # set +x 00:09:41.737 ************************************ 00:09:41.737 START TEST filesystem_ext4 00:09:41.737 ************************************ 00:09:41.737 23:23:02 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:41.737 23:23:02 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:41.737 23:23:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:41.737 23:23:02 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:41.737 23:23:02 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:09:41.738 23:23:02 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:41.738 23:23:02 -- common/autotest_common.sh@904 -- # local i=0 00:09:41.738 23:23:02 -- common/autotest_common.sh@905 -- # local force 00:09:41.738 23:23:02 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:09:41.738 23:23:02 -- common/autotest_common.sh@908 -- # force=-F 00:09:41.738 23:23:02 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:41.738 mke2fs 1.46.5 (30-Dec-2021) 00:09:41.997 Discarding device blocks: 0/522240 done 00:09:41.997 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:41.997 Filesystem UUID: 50da3ee2-57d5-46ed-b2ac-138f68723304 00:09:41.997 Superblock backups stored on blocks: 00:09:41.997 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:41.997 00:09:41.997 Allocating group tables: 0/64 done 00:09:41.997 Writing inode tables: 0/64 done 00:09:42.929 Creating journal (8192 blocks): done 00:09:43.756 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:09:43.756 00:09:43.756 23:23:04 -- 
common/autotest_common.sh@921 -- # return 0 00:09:43.756 23:23:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:44.692 23:23:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:44.692 23:23:05 -- target/filesystem.sh@25 -- # sync 00:09:44.692 23:23:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:44.692 23:23:05 -- target/filesystem.sh@27 -- # sync 00:09:44.692 23:23:05 -- target/filesystem.sh@29 -- # i=0 00:09:44.692 23:23:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:44.692 23:23:05 -- target/filesystem.sh@37 -- # kill -0 154641 00:09:44.692 23:23:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:44.692 23:23:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:44.692 23:23:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:44.692 23:23:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:44.692 00:09:44.692 real 0m2.920s 00:09:44.692 user 0m0.017s 00:09:44.692 sys 0m0.053s 00:09:44.692 23:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.692 23:23:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.692 ************************************ 00:09:44.692 END TEST filesystem_ext4 00:09:44.692 ************************************ 00:09:44.692 23:23:05 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:44.692 23:23:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:44.692 23:23:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.692 23:23:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.692 ************************************ 00:09:44.692 START TEST filesystem_btrfs 00:09:44.692 ************************************ 00:09:44.692 23:23:05 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:44.692 23:23:05 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:44.692 23:23:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:44.692 23:23:05 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:44.692 23:23:05 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:09:44.692 23:23:05 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:44.692 23:23:05 -- common/autotest_common.sh@904 -- # local i=0 00:09:44.692 23:23:05 -- common/autotest_common.sh@905 -- # local force 00:09:44.692 23:23:05 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:09:44.692 23:23:05 -- common/autotest_common.sh@910 -- # force=-f 00:09:44.692 23:23:05 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:44.954 btrfs-progs v6.6.2 00:09:44.954 See https://btrfs.readthedocs.io for more information. 00:09:44.954 00:09:44.954 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:44.954 NOTE: several default settings have changed in version 5.15, please make sure 00:09:44.954 this does not affect your deployments: 00:09:44.954 - DUP for metadata (-m dup) 00:09:44.954 - enabled no-holes (-O no-holes) 00:09:44.954 - enabled free-space-tree (-R free-space-tree) 00:09:44.954 00:09:44.954 Label: (null) 00:09:44.954 UUID: 9eba54b0-811b-4d88-b2f8-f079ba33d953 00:09:44.954 Node size: 16384 00:09:44.954 Sector size: 4096 00:09:44.954 Filesystem size: 510.00MiB 00:09:44.954 Block group profiles: 00:09:44.954 Data: single 8.00MiB 00:09:44.954 Metadata: DUP 32.00MiB 00:09:44.954 System: DUP 8.00MiB 00:09:44.954 SSD detected: yes 00:09:44.954 Zoned device: no 00:09:44.954 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:44.954 Runtime features: free-space-tree 00:09:44.954 Checksum: crc32c 00:09:44.954 Number of devices: 1 00:09:44.954 Devices: 00:09:44.954 ID SIZE PATH 00:09:44.954 1 510.00MiB /dev/nvme0n1p1 00:09:44.954 00:09:44.954 23:23:05 -- common/autotest_common.sh@921 -- # return 0 00:09:44.954 23:23:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:45.888 23:23:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:45.888 23:23:06 -- target/filesystem.sh@25 -- # sync 00:09:45.888 23:23:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:45.888 23:23:06 -- target/filesystem.sh@27 -- # sync 00:09:45.888 23:23:06 -- target/filesystem.sh@29 -- # i=0 00:09:45.888 23:23:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:45.888 23:23:06 -- target/filesystem.sh@37 -- # kill -0 154641 00:09:45.888 23:23:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:45.888 23:23:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:45.888 23:23:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:45.888 23:23:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:45.888 00:09:45.888 real 0m1.233s 00:09:45.888 user 0m0.042s 00:09:45.888 sys 0m0.106s 00:09:45.888 23:23:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.888 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:09:45.888 ************************************ 00:09:45.888 END TEST filesystem_btrfs 00:09:45.888 ************************************ 00:09:45.888 23:23:06 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:45.888 23:23:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:45.888 23:23:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.888 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:09:45.888 ************************************ 00:09:45.888 START TEST filesystem_xfs 00:09:45.888 ************************************ 00:09:45.888 23:23:06 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:09:45.888 23:23:06 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:45.888 23:23:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:45.888 23:23:06 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:45.888 23:23:06 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:09:45.888 23:23:06 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:45.888 23:23:06 -- common/autotest_common.sh@904 -- # local i=0 00:09:45.888 23:23:06 -- common/autotest_common.sh@905 -- # local force 00:09:45.888 23:23:06 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:09:45.888 23:23:06 -- common/autotest_common.sh@910 -- # force=-f 00:09:45.888 23:23:06 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:46.145 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:46.145 = sectsz=512 attr=2, projid32bit=1 00:09:46.145 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:46.145 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:46.145 data = bsize=4096 blocks=130560, imaxpct=25 00:09:46.145 = sunit=0 swidth=0 blks 00:09:46.145 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:46.145 log =internal log bsize=4096 blocks=16384, version=2 00:09:46.145 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:46.145 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:47.079 Discarding blocks...Done. 00:09:47.079 23:23:07 -- common/autotest_common.sh@921 -- # return 0 00:09:47.079 23:23:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:48.991 23:23:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:48.991 23:23:09 -- target/filesystem.sh@25 -- # sync 00:09:48.991 23:23:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:48.992 23:23:09 -- target/filesystem.sh@27 -- # sync 00:09:48.992 23:23:09 -- target/filesystem.sh@29 -- # i=0 00:09:48.992 23:23:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:48.992 23:23:09 -- target/filesystem.sh@37 -- # kill -0 154641 00:09:48.992 23:23:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:48.992 23:23:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:48.992 23:23:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:48.992 23:23:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:48.992 00:09:48.992 real 0m3.057s 00:09:48.992 user 0m0.016s 00:09:48.992 sys 0m0.057s 00:09:48.992 23:23:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.992 23:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:48.992 ************************************ 00:09:48.992 END TEST filesystem_xfs 00:09:48.992 ************************************ 00:09:48.992 23:23:09 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:49.252 23:23:10 -- target/filesystem.sh@93 -- # sync 00:09:49.252 23:23:10 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.513 23:23:10 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.513 23:23:10 -- common/autotest_common.sh@1198 -- # local i=0 00:09:49.513 23:23:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:09:49.513 23:23:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.513 23:23:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:49.513 23:23:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.513 23:23:10 -- common/autotest_common.sh@1210 -- # return 0 00:09:49.513 23:23:10 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.513 23:23:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:49.513 23:23:10 -- common/autotest_common.sh@10 -- # set +x 00:09:49.513 23:23:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:49.513 23:23:10 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:49.513 23:23:10 -- target/filesystem.sh@101 -- # killprocess 154641 00:09:49.513 23:23:10 -- common/autotest_common.sh@926 -- # '[' -z 154641 ']' 00:09:49.513 23:23:10 -- common/autotest_common.sh@930 -- # kill -0 154641 00:09:49.513 23:23:10 -- 
common/autotest_common.sh@931 -- # uname 00:09:49.513 23:23:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.513 23:23:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154641 00:09:49.513 23:23:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:49.513 23:23:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:49.513 23:23:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154641' 00:09:49.513 killing process with pid 154641 00:09:49.513 23:23:10 -- common/autotest_common.sh@945 -- # kill 154641 00:09:49.513 23:23:10 -- common/autotest_common.sh@950 -- # wait 154641 00:09:50.083 23:23:10 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:50.083 00:09:50.083 real 0m13.808s 00:09:50.083 user 0m53.359s 00:09:50.083 sys 0m1.898s 00:09:50.083 23:23:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.083 23:23:10 -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 ************************************ 00:09:50.083 END TEST nvmf_filesystem_no_in_capsule 00:09:50.083 ************************************ 00:09:50.083 23:23:10 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:50.083 23:23:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:50.083 23:23:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.083 23:23:10 -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 ************************************ 00:09:50.083 START TEST nvmf_filesystem_in_capsule 00:09:50.083 ************************************ 00:09:50.083 23:23:10 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:09:50.083 23:23:10 -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:50.083 23:23:10 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:50.083 23:23:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:50.083 23:23:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:50.083 23:23:10 -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 23:23:10 -- nvmf/common.sh@469 -- # nvmfpid=156506 00:09:50.083 23:23:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:50.083 23:23:10 -- nvmf/common.sh@470 -- # waitforlisten 156506 00:09:50.083 23:23:10 -- common/autotest_common.sh@819 -- # '[' -z 156506 ']' 00:09:50.083 23:23:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.083 23:23:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:50.083 23:23:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.083 23:23:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:50.083 23:23:10 -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 [2024-07-11 23:23:10.839958] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
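[Annotation] The killprocess teardown above and the nvmfappstart/waitforlisten startup below follow the same poll-until-ready pattern against the app pid and its RPC socket (/var/tmp/spdk.sock). A minimal sketch of that wait loop, assuming the default socket path from the trace; the helper name and the socket test used here are illustrative, not the harness's exact implementation:

    # Sketch: block until the freshly started SPDK app's RPC socket appears,
    # bailing out early if the pid dies during initialization.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
            [ -S "$rpc_addr" ] && return 0           # UNIX domain socket is up
            sleep 0.5
        done
        return 1                                      # timed out
    }
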
00:09:50.083 [2024-07-11 23:23:10.840053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.083 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.083 [2024-07-11 23:23:10.917200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.083 [2024-07-11 23:23:11.007719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:50.083 [2024-07-11 23:23:11.007892] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.083 [2024-07-11 23:23:11.007914] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.083 [2024-07-11 23:23:11.007929] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.083 [2024-07-11 23:23:11.008038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.083 [2024-07-11 23:23:11.008114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.083 [2024-07-11 23:23:11.008168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.083 [2024-07-11 23:23:11.008172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.017 23:23:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:51.017 23:23:11 -- common/autotest_common.sh@852 -- # return 0 00:09:51.017 23:23:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:51.017 23:23:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:51.017 23:23:11 -- common/autotest_common.sh@10 -- # set +x 00:09:51.017 23:23:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.017 23:23:11 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:51.017 23:23:11 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:51.017 23:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.017 23:23:11 -- common/autotest_common.sh@10 -- # set +x 00:09:51.017 [2024-07-11 23:23:11.892106] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.017 23:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.017 23:23:11 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:51.017 23:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.017 23:23:11 -- common/autotest_common.sh@10 -- # set +x 00:09:51.276 Malloc1 00:09:51.276 23:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.276 23:23:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.276 23:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.276 23:23:12 -- common/autotest_common.sh@10 -- # set +x 00:09:51.276 23:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.276 23:23:12 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:51.276 23:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.276 23:23:12 -- common/autotest_common.sh@10 -- # set +x 00:09:51.276 23:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.276 23:23:12 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
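[Annotation] The rpc_cmd traces that follow stand up the in-capsule target: a TCP transport with a 4096-byte in-capsule data size, a 512 MiB malloc bdev, one subsystem, a namespace, and a listener. Issued directly with SPDK's rpc.py client, the same sequence would look like this (socket path left at its default):

    # TCP transport; -c 4096 sets the in-capsule data size this test exercises
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MB malloc disk with 512-byte blocks (1048576 blocks, as in the trace)
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    # Subsystem allowing any host (-a), with the serial the tests grep for
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Immediately afterwards the test reads the size back (bdev_get_bdevs -b Malloc1 piped through jq '.[] .block_size' and '.[] .num_blocks') and computes 512 B x 1048576 blocks = 536870912 bytes, the malloc_size it later compares against the connected nvme device.
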
00:09:51.276 23:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.276 23:23:12 -- common/autotest_common.sh@10 -- # set +x 00:09:51.276 [2024-07-11 23:23:12.089926] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.276 23:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.276 23:23:12 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:51.276 23:23:12 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:09:51.276 23:23:12 -- common/autotest_common.sh@1358 -- # local bdev_info 00:09:51.276 23:23:12 -- common/autotest_common.sh@1359 -- # local bs 00:09:51.276 23:23:12 -- common/autotest_common.sh@1360 -- # local nb 00:09:51.276 23:23:12 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:51.276 23:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.276 23:23:12 -- common/autotest_common.sh@10 -- # set +x 00:09:51.276 23:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.276 23:23:12 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:09:51.276 { 00:09:51.276 "name": "Malloc1", 00:09:51.276 "aliases": [ 00:09:51.276 "eb635ea8-ad6a-419c-a69e-a2e21c29704a" 00:09:51.276 ], 00:09:51.276 "product_name": "Malloc disk", 00:09:51.276 "block_size": 512, 00:09:51.276 "num_blocks": 1048576, 00:09:51.276 "uuid": "eb635ea8-ad6a-419c-a69e-a2e21c29704a", 00:09:51.276 "assigned_rate_limits": { 00:09:51.276 "rw_ios_per_sec": 0, 00:09:51.276 "rw_mbytes_per_sec": 0, 00:09:51.276 "r_mbytes_per_sec": 0, 00:09:51.276 "w_mbytes_per_sec": 0 00:09:51.276 }, 00:09:51.276 "claimed": true, 00:09:51.276 "claim_type": "exclusive_write", 00:09:51.276 "zoned": false, 00:09:51.276 "supported_io_types": { 00:09:51.276 "read": true, 00:09:51.276 "write": true, 00:09:51.276 "unmap": true, 00:09:51.276 "write_zeroes": true, 00:09:51.276 "flush": true, 00:09:51.276 "reset": true, 00:09:51.276 "compare": false, 00:09:51.276 "compare_and_write": false, 00:09:51.276 "abort": true, 00:09:51.276 "nvme_admin": false, 00:09:51.276 "nvme_io": false 00:09:51.276 }, 00:09:51.276 "memory_domains": [ 00:09:51.276 { 00:09:51.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.276 "dma_device_type": 2 00:09:51.276 } 00:09:51.276 ], 00:09:51.276 "driver_specific": {} 00:09:51.276 } 00:09:51.276 ]' 00:09:51.276 23:23:12 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:09:51.277 23:23:12 -- common/autotest_common.sh@1362 -- # bs=512 00:09:51.277 23:23:12 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:09:51.277 23:23:12 -- common/autotest_common.sh@1363 -- # nb=1048576 00:09:51.277 23:23:12 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:09:51.277 23:23:12 -- common/autotest_common.sh@1367 -- # echo 512 00:09:51.277 23:23:12 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:51.277 23:23:12 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.215 23:23:12 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.215 23:23:12 -- common/autotest_common.sh@1177 -- # local i=0 00:09:52.215 23:23:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.215 23:23:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:52.215 23:23:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:54.122 23:23:14 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:54.122 23:23:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:54.122 23:23:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.122 23:23:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:09:54.122 23:23:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.122 23:23:14 -- common/autotest_common.sh@1187 -- # return 0 00:09:54.122 23:23:14 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:54.122 23:23:14 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:54.122 23:23:14 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:54.122 23:23:14 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:54.122 23:23:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:54.122 23:23:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:54.122 23:23:14 -- setup/common.sh@80 -- # echo 536870912 00:09:54.122 23:23:14 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:54.122 23:23:14 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:54.122 23:23:14 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:54.122 23:23:14 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:54.381 23:23:15 -- target/filesystem.sh@69 -- # partprobe 00:09:54.639 23:23:15 -- target/filesystem.sh@70 -- # sleep 1 00:09:55.596 23:23:16 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:55.596 23:23:16 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:55.596 23:23:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:55.596 23:23:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.596 23:23:16 -- common/autotest_common.sh@10 -- # set +x 00:09:55.596 ************************************ 00:09:55.596 START TEST filesystem_in_capsule_ext4 00:09:55.596 ************************************ 00:09:55.596 23:23:16 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:55.596 23:23:16 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:55.596 23:23:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:55.596 23:23:16 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:55.596 23:23:16 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:09:55.596 23:23:16 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:55.596 23:23:16 -- common/autotest_common.sh@904 -- # local i=0 00:09:55.596 23:23:16 -- common/autotest_common.sh@905 -- # local force 00:09:55.596 23:23:16 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:09:55.596 23:23:16 -- common/autotest_common.sh@908 -- # force=-F 00:09:55.596 23:23:16 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:55.858 mke2fs 1.46.5 (30-Dec-2021) 00:09:55.858 Discarding device blocks: 0/522240 done 00:09:55.858 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:55.858 Filesystem UUID: c4fd8e96-9e1f-4fa9-9168-36784ba92cc1 00:09:55.858 Superblock backups stored on blocks: 00:09:55.858 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:55.858 00:09:55.858 Allocating group tables: 0/64 done 00:09:55.858 Writing inode tables: 0/64 done 00:09:56.118 Creating journal (8192 blocks): done 00:09:56.634 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:09:56.634 00:09:56.634 
23:23:17 -- common/autotest_common.sh@921 -- # return 0 00:09:56.634 23:23:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:57.203 23:23:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:57.203 23:23:17 -- target/filesystem.sh@25 -- # sync 00:09:57.203 23:23:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:57.203 23:23:17 -- target/filesystem.sh@27 -- # sync 00:09:57.203 23:23:17 -- target/filesystem.sh@29 -- # i=0 00:09:57.203 23:23:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:57.203 23:23:17 -- target/filesystem.sh@37 -- # kill -0 156506 00:09:57.203 23:23:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:57.203 23:23:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:57.203 23:23:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:57.203 23:23:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:57.203 00:09:57.203 real 0m1.432s 00:09:57.203 user 0m0.016s 00:09:57.203 sys 0m0.063s 00:09:57.203 23:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.203 23:23:17 -- common/autotest_common.sh@10 -- # set +x 00:09:57.203 ************************************ 00:09:57.203 END TEST filesystem_in_capsule_ext4 00:09:57.203 ************************************ 00:09:57.203 23:23:17 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:57.203 23:23:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:57.203 23:23:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.203 23:23:17 -- common/autotest_common.sh@10 -- # set +x 00:09:57.203 ************************************ 00:09:57.203 START TEST filesystem_in_capsule_btrfs 00:09:57.203 ************************************ 00:09:57.203 23:23:17 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:57.203 23:23:17 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:57.203 23:23:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:57.203 23:23:17 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:57.203 23:23:17 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:09:57.203 23:23:17 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:57.203 23:23:17 -- common/autotest_common.sh@904 -- # local i=0 00:09:57.203 23:23:17 -- common/autotest_common.sh@905 -- # local force 00:09:57.203 23:23:17 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:09:57.203 23:23:17 -- common/autotest_common.sh@910 -- # force=-f 00:09:57.203 23:23:17 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:57.461 btrfs-progs v6.6.2 00:09:57.461 See https://btrfs.readthedocs.io for more information. 00:09:57.461 00:09:57.461 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:57.461 NOTE: several default settings have changed in version 5.15, please make sure 00:09:57.461 this does not affect your deployments: 00:09:57.461 - DUP for metadata (-m dup) 00:09:57.461 - enabled no-holes (-O no-holes) 00:09:57.461 - enabled free-space-tree (-R free-space-tree) 00:09:57.461 00:09:57.461 Label: (null) 00:09:57.461 UUID: 9daa8e29-8d0f-4e10-bdc6-6946b1c0d01d 00:09:57.461 Node size: 16384 00:09:57.461 Sector size: 4096 00:09:57.461 Filesystem size: 510.00MiB 00:09:57.461 Block group profiles: 00:09:57.461 Data: single 8.00MiB 00:09:57.461 Metadata: DUP 32.00MiB 00:09:57.461 System: DUP 8.00MiB 00:09:57.461 SSD detected: yes 00:09:57.461 Zoned device: no 00:09:57.461 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:57.461 Runtime features: free-space-tree 00:09:57.461 Checksum: crc32c 00:09:57.461 Number of devices: 1 00:09:57.461 Devices: 00:09:57.461 ID SIZE PATH 00:09:57.461 1 510.00MiB /dev/nvme0n1p1 00:09:57.461 00:09:57.461 23:23:18 -- common/autotest_common.sh@921 -- # return 0 00:09:57.461 23:23:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:58.027 23:23:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:58.027 23:23:18 -- target/filesystem.sh@25 -- # sync 00:09:58.027 23:23:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:58.027 23:23:18 -- target/filesystem.sh@27 -- # sync 00:09:58.027 23:23:18 -- target/filesystem.sh@29 -- # i=0 00:09:58.027 23:23:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:58.288 23:23:19 -- target/filesystem.sh@37 -- # kill -0 156506 00:09:58.288 23:23:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:58.288 23:23:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:58.288 23:23:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:58.288 23:23:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:58.288 00:09:58.288 real 0m1.038s 00:09:58.288 user 0m0.020s 00:09:58.288 sys 0m0.129s 00:09:58.288 23:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.288 23:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:58.288 ************************************ 00:09:58.288 END TEST filesystem_in_capsule_btrfs 00:09:58.288 ************************************ 00:09:58.288 23:23:19 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:58.288 23:23:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:58.288 23:23:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.288 23:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:58.288 ************************************ 00:09:58.288 START TEST filesystem_in_capsule_xfs 00:09:58.288 ************************************ 00:09:58.288 23:23:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:09:58.288 23:23:19 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:58.288 23:23:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:58.288 23:23:19 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:58.288 23:23:19 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:09:58.288 23:23:19 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:58.288 23:23:19 -- common/autotest_common.sh@904 -- # local i=0 00:09:58.288 23:23:19 -- common/autotest_common.sh@905 -- # local force 00:09:58.288 23:23:19 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:09:58.288 23:23:19 -- common/autotest_common.sh@910 -- # force=-f 
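[Annotation] The make_filesystem traces here show the helper's only fstype-specific branch: mkfs.ext4 forces with -F while mkfs.btrfs and mkfs.xfs take -f. A condensed sketch of that helper, assuming a retry loop around mkfs (the retry count here is illustrative):

    # Pick the force flag by filesystem, then mkfs with a small retry budget.
    make_filesystem_sketch() {
        local fstype=$1 dev_name=$2 force i=0
        [ "$fstype" = ext4 ] && force=-F || force=-f
        until mkfs."$fstype" "$force" "$dev_name"; do
            (( ++i > 3 )) && return 1    # give up after a few attempts
            sleep 1
        done
        return 0
    }
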
00:09:58.288 23:23:19 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:58.288 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:58.288 = sectsz=512 attr=2, projid32bit=1 00:09:58.288 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:58.288 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:58.288 data = bsize=4096 blocks=130560, imaxpct=25 00:09:58.288 = sunit=0 swidth=0 blks 00:09:58.288 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:58.288 log =internal log bsize=4096 blocks=16384, version=2 00:09:58.288 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:58.288 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:59.221 Discarding blocks...Done. 00:09:59.221 23:23:19 -- common/autotest_common.sh@921 -- # return 0 00:09:59.221 23:23:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:01.119 23:23:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:01.119 23:23:22 -- target/filesystem.sh@25 -- # sync 00:10:01.119 23:23:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:01.119 23:23:22 -- target/filesystem.sh@27 -- # sync 00:10:01.119 23:23:22 -- target/filesystem.sh@29 -- # i=0 00:10:01.119 23:23:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:01.119 23:23:22 -- target/filesystem.sh@37 -- # kill -0 156506 00:10:01.119 23:23:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:01.119 23:23:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:01.119 23:23:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:01.119 23:23:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:01.376 00:10:01.376 real 0m3.022s 00:10:01.376 user 0m0.022s 00:10:01.376 sys 0m0.062s 00:10:01.376 23:23:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.376 23:23:22 -- common/autotest_common.sh@10 -- # set +x 00:10:01.376 ************************************ 00:10:01.376 END TEST filesystem_in_capsule_xfs 00:10:01.376 ************************************ 00:10:01.376 23:23:22 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:01.635 23:23:22 -- target/filesystem.sh@93 -- # sync 00:10:01.635 23:23:22 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.635 23:23:22 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.635 23:23:22 -- common/autotest_common.sh@1198 -- # local i=0 00:10:01.635 23:23:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:01.635 23:23:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.635 23:23:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:01.635 23:23:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.635 23:23:22 -- common/autotest_common.sh@1210 -- # return 0 00:10:01.635 23:23:22 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.635 23:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:01.635 23:23:22 -- common/autotest_common.sh@10 -- # set +x 00:10:01.635 23:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:01.635 23:23:22 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:01.635 23:23:22 -- target/filesystem.sh@101 -- # killprocess 156506 00:10:01.635 23:23:22 -- common/autotest_common.sh@926 -- # '[' -z 156506 ']' 00:10:01.635 23:23:22 -- common/autotest_common.sh@930 -- # kill -0 156506 
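[Annotation] The nvme disconnect above is followed by waitforserial_disconnect, the mirror of the waitforserial loop run after each connect: both poll lsblk's SERIAL column for SPDKISFASTANDAWESOME. A sketch of the pair, with the loop bound taken from the (( i++ <= 15 )) check visible in the trace; sleep intervals are illustrative:

    # After 'nvme connect': wait until a block device advertises the serial.
    waitforserial_sketch() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 0
            sleep 2
        done
        return 1
    }
    # After 'nvme disconnect': wait until no device advertises it anymore.
    waitforserial_disconnect_sketch() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ >= 15 )) && return 1
            sleep 1
        done
        return 0
    }
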
00:10:01.635 23:23:22 -- common/autotest_common.sh@931 -- # uname 00:10:01.635 23:23:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:01.635 23:23:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 156506 00:10:01.636 23:23:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:01.636 23:23:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:01.636 23:23:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 156506' 00:10:01.636 killing process with pid 156506 00:10:01.636 23:23:22 -- common/autotest_common.sh@945 -- # kill 156506 00:10:01.636 23:23:22 -- common/autotest_common.sh@950 -- # wait 156506 00:10:02.202 23:23:23 -- target/filesystem.sh@102 -- # nvmfpid= 00:10:02.202 00:10:02.202 real 0m12.241s 00:10:02.202 user 0m47.246s 00:10:02.202 sys 0m1.851s 00:10:02.202 23:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.202 23:23:23 -- common/autotest_common.sh@10 -- # set +x 00:10:02.202 ************************************ 00:10:02.202 END TEST nvmf_filesystem_in_capsule 00:10:02.202 ************************************ 00:10:02.202 23:23:23 -- target/filesystem.sh@108 -- # nvmftestfini 00:10:02.202 23:23:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:02.202 23:23:23 -- nvmf/common.sh@116 -- # sync 00:10:02.203 23:23:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:02.203 23:23:23 -- nvmf/common.sh@119 -- # set +e 00:10:02.203 23:23:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:02.203 23:23:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:02.203 rmmod nvme_tcp 00:10:02.203 rmmod nvme_fabrics 00:10:02.203 rmmod nvme_keyring 00:10:02.203 23:23:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:02.203 23:23:23 -- nvmf/common.sh@123 -- # set -e 00:10:02.203 23:23:23 -- nvmf/common.sh@124 -- # return 0 00:10:02.203 23:23:23 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:10:02.203 23:23:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:02.203 23:23:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:02.203 23:23:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:02.203 23:23:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.203 23:23:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:02.203 23:23:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.203 23:23:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.203 23:23:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.739 23:23:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:04.739 00:10:04.739 real 0m31.006s 00:10:04.739 user 1m41.551s 00:10:04.739 sys 0m5.779s 00:10:04.739 23:23:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.739 23:23:25 -- common/autotest_common.sh@10 -- # set +x 00:10:04.739 ************************************ 00:10:04.739 END TEST nvmf_filesystem 00:10:04.739 ************************************ 00:10:04.739 23:23:25 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:04.739 23:23:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:04.739 23:23:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:04.739 23:23:25 -- common/autotest_common.sh@10 -- # set +x 00:10:04.739 ************************************ 00:10:04.739 START TEST nvmf_discovery 00:10:04.739 ************************************ 00:10:04.739 23:23:25 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:04.739 * Looking for test storage... 00:10:04.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.739 23:23:25 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.739 23:23:25 -- nvmf/common.sh@7 -- # uname -s 00:10:04.739 23:23:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.739 23:23:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.739 23:23:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.739 23:23:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.739 23:23:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.739 23:23:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.739 23:23:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.739 23:23:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.739 23:23:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.739 23:23:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.739 23:23:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.739 23:23:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.739 23:23:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.739 23:23:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.739 23:23:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.739 23:23:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.739 23:23:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.739 23:23:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.739 23:23:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.739 23:23:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.739 23:23:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.739 23:23:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.739 23:23:25 -- paths/export.sh@5 -- # export PATH 00:10:04.739 23:23:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.739 23:23:25 -- nvmf/common.sh@46 -- # : 0 00:10:04.739 23:23:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:04.739 23:23:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:04.739 23:23:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:04.739 23:23:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.739 23:23:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.739 23:23:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:04.739 23:23:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:04.739 23:23:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:04.739 23:23:25 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:04.739 23:23:25 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:04.740 23:23:25 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:04.740 23:23:25 -- target/discovery.sh@15 -- # hash nvme 00:10:04.740 23:23:25 -- target/discovery.sh@20 -- # nvmftestinit 00:10:04.740 23:23:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:04.740 23:23:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.740 23:23:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:04.740 23:23:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:04.740 23:23:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:04.740 23:23:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.740 23:23:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.740 23:23:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.740 23:23:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:04.740 23:23:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:04.740 23:23:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:04.740 23:23:25 -- common/autotest_common.sh@10 -- # set +x 00:10:07.277 23:23:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:07.277 23:23:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:07.277 23:23:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:07.277 23:23:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:07.277 23:23:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:07.277 23:23:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:07.277 23:23:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:07.277 23:23:27 -- 
nvmf/common.sh@294 -- # net_devs=() 00:10:07.277 23:23:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:10:07.277 23:23:27 -- nvmf/common.sh@295 -- # e810=() 00:10:07.277 23:23:27 -- nvmf/common.sh@295 -- # local -ga e810 00:10:07.277 23:23:27 -- nvmf/common.sh@296 -- # x722=() 00:10:07.277 23:23:27 -- nvmf/common.sh@296 -- # local -ga x722 00:10:07.277 23:23:27 -- nvmf/common.sh@297 -- # mlx=() 00:10:07.278 23:23:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:07.278 23:23:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.278 23:23:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:07.278 23:23:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:07.278 23:23:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:07.278 23:23:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:07.278 23:23:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:07.278 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:07.278 23:23:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:07.278 23:23:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:07.278 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:07.278 23:23:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:07.278 23:23:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:07.278 23:23:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.278 23:23:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:07.278 23:23:27 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.278 23:23:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:07.278 Found net devices under 0000:84:00.0: cvl_0_0 00:10:07.278 23:23:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.278 23:23:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:07.278 23:23:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.278 23:23:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:07.278 23:23:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.278 23:23:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:07.278 Found net devices under 0000:84:00.1: cvl_0_1 00:10:07.278 23:23:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.278 23:23:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:07.278 23:23:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:07.278 23:23:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:07.278 23:23:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:07.278 23:23:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.278 23:23:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.278 23:23:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.278 23:23:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:07.278 23:23:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.278 23:23:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.278 23:23:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:07.278 23:23:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.278 23:23:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.278 23:23:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:07.278 23:23:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:07.278 23:23:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.278 23:23:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.278 23:23:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.278 23:23:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.278 23:23:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:07.278 23:23:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.278 23:23:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.278 23:23:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.278 23:23:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:07.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:10:07.278 00:10:07.278 --- 10.0.0.2 ping statistics --- 00:10:07.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.278 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:10:07.278 23:23:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:10:07.278 00:10:07.278 --- 10.0.0.1 ping statistics --- 00:10:07.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.278 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:07.278 23:23:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.278 23:23:28 -- nvmf/common.sh@410 -- # return 0 00:10:07.278 23:23:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:07.278 23:23:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.278 23:23:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:07.278 23:23:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:07.278 23:23:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.278 23:23:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:07.278 23:23:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:07.278 23:23:28 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:07.278 23:23:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:07.278 23:23:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:07.278 23:23:28 -- common/autotest_common.sh@10 -- # set +x 00:10:07.278 23:23:28 -- nvmf/common.sh@469 -- # nvmfpid=160175 00:10:07.278 23:23:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.278 23:23:28 -- nvmf/common.sh@470 -- # waitforlisten 160175 00:10:07.278 23:23:28 -- common/autotest_common.sh@819 -- # '[' -z 160175 ']' 00:10:07.278 23:23:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.278 23:23:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:07.278 23:23:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.278 23:23:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:07.278 23:23:28 -- common/autotest_common.sh@10 -- # set +x 00:10:07.278 [2024-07-11 23:23:28.122221] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:07.278 [2024-07-11 23:23:28.122322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.278 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.278 [2024-07-11 23:23:28.201829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.537 [2024-07-11 23:23:28.297056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:07.537 [2024-07-11 23:23:28.297229] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.537 [2024-07-11 23:23:28.297251] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.537 [2024-07-11 23:23:28.297265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
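[Annotation] The discovery test being set up below registers four null-bdev subsystems plus a discovery referral, which is what produces the six-record discovery log later in the trace. The equivalent sequence with rpc.py (addresses, NQNs, and sizes copied from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        rpc.py bdev_null_create "Null$i" 102400 512    # sizes from the trace
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # Expect 6 records: the discovery subsystem itself, cnode1-4, and the referral.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
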
00:10:07.537 [2024-07-11 23:23:28.297321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.537 [2024-07-11 23:23:28.297375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.537 [2024-07-11 23:23:28.297423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.537 [2024-07-11 23:23:28.297426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.912 23:23:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:08.912 23:23:29 -- common/autotest_common.sh@852 -- # return 0 00:10:08.912 23:23:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:08.912 23:23:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.912 23:23:29 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 [2024-07-11 23:23:29.478086] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@26 -- # seq 1 4 00:10:08.912 23:23:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.912 23:23:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 Null1 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 [2024-07-11 23:23:29.518374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.912 23:23:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 Null2 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:08.912 23:23:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.912 23:23:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 Null3 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.912 23:23:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 Null4 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:08.912 
23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:08.912 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.912 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.912 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.912 23:23:29 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:10:08.912 00:10:08.912 Discovery Log Number of Records 6, Generation counter 6 00:10:08.912 =====Discovery Log Entry 0====== 00:10:08.912 trtype: tcp 00:10:08.912 adrfam: ipv4 00:10:08.912 subtype: current discovery subsystem 00:10:08.912 treq: not required 00:10:08.912 portid: 0 00:10:08.912 trsvcid: 4420 00:10:08.912 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:08.912 traddr: 10.0.0.2 00:10:08.912 eflags: explicit discovery connections, duplicate discovery information 00:10:08.912 sectype: none 00:10:08.912 =====Discovery Log Entry 1====== 00:10:08.912 trtype: tcp 00:10:08.912 adrfam: ipv4 00:10:08.912 subtype: nvme subsystem 00:10:08.912 treq: not required 00:10:08.912 portid: 0 00:10:08.912 trsvcid: 4420 00:10:08.912 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:08.912 traddr: 10.0.0.2 00:10:08.912 eflags: none 00:10:08.912 sectype: none 00:10:08.912 =====Discovery Log Entry 2====== 00:10:08.912 trtype: tcp 00:10:08.912 adrfam: ipv4 00:10:08.912 subtype: nvme subsystem 00:10:08.912 treq: not required 00:10:08.912 portid: 0 00:10:08.912 trsvcid: 4420 00:10:08.912 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:08.912 traddr: 10.0.0.2 00:10:08.912 eflags: none 00:10:08.912 sectype: none 00:10:08.912 =====Discovery Log Entry 3====== 00:10:08.912 trtype: tcp 00:10:08.912 adrfam: ipv4 00:10:08.912 subtype: nvme subsystem 00:10:08.912 treq: not required 00:10:08.912 portid: 0 00:10:08.912 trsvcid: 4420 00:10:08.912 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:08.912 traddr: 10.0.0.2 00:10:08.912 eflags: none 00:10:08.913 sectype: none 00:10:08.913 =====Discovery Log Entry 4====== 00:10:08.913 trtype: tcp 00:10:08.913 adrfam: ipv4 00:10:08.913 subtype: nvme subsystem 00:10:08.913 treq: not required 00:10:08.913 portid: 0 00:10:08.913 trsvcid: 4420 00:10:08.913 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:08.913 traddr: 10.0.0.2 00:10:08.913 eflags: none 00:10:08.913 sectype: none 00:10:08.913 =====Discovery Log Entry 5====== 00:10:08.913 trtype: tcp 00:10:08.913 adrfam: ipv4 00:10:08.913 subtype: discovery subsystem referral 00:10:08.913 treq: not required 00:10:08.913 portid: 0 00:10:08.913 trsvcid: 4430 00:10:08.913 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:08.913 traddr: 10.0.0.2 00:10:08.913 eflags: none 00:10:08.913 sectype: none 00:10:08.913 23:23:29 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:08.913 Perform nvmf subsystem discovery via RPC 00:10:08.913 23:23:29 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:08.913 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.913 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.913 [2024-07-11 23:23:29.847290] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:10:08.913 [ 00:10:08.913 { 00:10:08.913 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:08.913 "subtype": "Discovery", 00:10:08.913 "listen_addresses": [ 00:10:08.913 { 00:10:08.913 "transport": "TCP", 00:10:08.913 "trtype": "TCP", 00:10:08.913 "adrfam": "IPv4", 00:10:08.913 "traddr": "10.0.0.2", 00:10:08.913 "trsvcid": "4420" 00:10:08.913 } 00:10:08.913 ], 00:10:08.913 "allow_any_host": true, 00:10:08.913 "hosts": [] 00:10:08.913 }, 00:10:08.913 { 00:10:08.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.913 "subtype": "NVMe", 00:10:08.913 "listen_addresses": [ 00:10:08.913 { 00:10:08.913 "transport": "TCP", 00:10:08.913 "trtype": "TCP", 00:10:08.913 "adrfam": "IPv4", 00:10:08.913 "traddr": "10.0.0.2", 00:10:08.913 "trsvcid": "4420" 00:10:08.913 } 00:10:08.913 ], 00:10:08.913 "allow_any_host": true, 00:10:08.913 "hosts": [], 00:10:08.913 "serial_number": "SPDK00000000000001", 00:10:08.913 "model_number": "SPDK bdev Controller", 00:10:08.913 "max_namespaces": 32, 00:10:08.913 "min_cntlid": 1, 00:10:08.913 "max_cntlid": 65519, 00:10:08.913 "namespaces": [ 00:10:08.913 { 00:10:08.913 "nsid": 1, 00:10:08.913 "bdev_name": "Null1", 00:10:08.913 "name": "Null1", 00:10:08.913 "nguid": "D58628F0476B40DDAF49444849311ED6", 00:10:08.913 "uuid": "d58628f0-476b-40dd-af49-444849311ed6" 00:10:08.913 } 00:10:08.913 ] 00:10:08.913 }, 00:10:08.913 { 00:10:08.913 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:08.913 "subtype": "NVMe", 00:10:08.913 "listen_addresses": [ 00:10:08.913 { 00:10:08.913 "transport": "TCP", 00:10:08.913 "trtype": "TCP", 00:10:08.913 "adrfam": "IPv4", 00:10:08.913 "traddr": "10.0.0.2", 00:10:08.913 "trsvcid": "4420" 00:10:08.913 } 00:10:08.913 ], 00:10:08.913 "allow_any_host": true, 00:10:08.913 "hosts": [], 00:10:08.913 "serial_number": "SPDK00000000000002", 00:10:08.913 "model_number": "SPDK bdev Controller", 00:10:08.913 "max_namespaces": 32, 00:10:08.913 "min_cntlid": 1, 00:10:08.913 "max_cntlid": 65519, 00:10:08.913 "namespaces": [ 00:10:08.913 { 00:10:08.913 "nsid": 1, 00:10:08.913 "bdev_name": "Null2", 00:10:08.913 "name": "Null2", 00:10:08.913 "nguid": "49EAEED360B84659944A61019955378A", 00:10:08.913 "uuid": "49eaeed3-60b8-4659-944a-61019955378a" 00:10:08.913 } 00:10:08.913 ] 00:10:08.913 }, 00:10:08.913 { 00:10:08.913 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:08.913 "subtype": "NVMe", 00:10:08.913 "listen_addresses": [ 00:10:08.913 { 00:10:08.913 "transport": "TCP", 00:10:08.913 "trtype": "TCP", 00:10:08.913 "adrfam": "IPv4", 00:10:08.913 "traddr": "10.0.0.2", 00:10:08.913 "trsvcid": "4420" 00:10:08.913 } 00:10:08.913 ], 00:10:08.913 "allow_any_host": true, 00:10:08.913 "hosts": [], 00:10:08.913 "serial_number": "SPDK00000000000003", 00:10:08.913 "model_number": "SPDK bdev Controller", 00:10:08.913 "max_namespaces": 32, 00:10:08.913 "min_cntlid": 1, 00:10:08.913 "max_cntlid": 65519, 00:10:08.913 "namespaces": [ 00:10:08.913 { 00:10:08.913 "nsid": 1, 00:10:08.913 "bdev_name": "Null3", 00:10:08.913 "name": "Null3", 00:10:08.913 "nguid": "26886C25950D4F66A2BEF74F0F4E8F2D", 00:10:08.913 "uuid": "26886c25-950d-4f66-a2be-f74f0f4e8f2d" 00:10:08.913 } 00:10:08.913 ] 
00:10:08.913 }, 00:10:08.913 { 00:10:08.913 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:08.913 "subtype": "NVMe", 00:10:08.913 "listen_addresses": [ 00:10:08.913 { 00:10:08.913 "transport": "TCP", 00:10:08.913 "trtype": "TCP", 00:10:08.913 "adrfam": "IPv4", 00:10:08.913 "traddr": "10.0.0.2", 00:10:08.913 "trsvcid": "4420" 00:10:08.913 } 00:10:08.913 ], 00:10:08.913 "allow_any_host": true, 00:10:08.913 "hosts": [], 00:10:08.913 "serial_number": "SPDK00000000000004", 00:10:08.913 "model_number": "SPDK bdev Controller", 00:10:08.913 "max_namespaces": 32, 00:10:08.913 "min_cntlid": 1, 00:10:08.913 "max_cntlid": 65519, 00:10:08.913 "namespaces": [ 00:10:08.913 { 00:10:08.913 "nsid": 1, 00:10:08.913 "bdev_name": "Null4", 00:10:08.913 "name": "Null4", 00:10:08.913 "nguid": "837B66D669364516BE912C6267DF3A7F", 00:10:08.913 "uuid": "837b66d6-6936-4516-be91-2c6267df3a7f" 00:10:08.913 } 00:10:08.913 ] 00:10:08.913 } 00:10:08.913 ] 00:10:08.913 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.170 23:23:29 -- target/discovery.sh@42 -- # seq 1 4 00:10:09.170 23:23:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.171 23:23:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.171 23:23:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.171 23:23:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.171 23:23:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
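The six discovery records and the nvmf_get_subsystems output above come from a fixed per-subsystem pattern in target/discovery.sh: one null bdev, one subsystem, one namespace, one TCP listener per iteration, plus a discovery listener and a referral. Condensed into a sketch (rpc.py path assumed):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 4); do
    $rpc bdev_null_create Null$i 102400 512                 # name, size, block size as in the run above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The teardown running here simply walks the same loop in reverse: delete each subsystem, then its bdev.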
00:10:09.171 23:23:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:09.171 23:23:29 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:09.171 23:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:09.171 23:23:29 -- common/autotest_common.sh@10 -- # set +x 00:10:09.171 23:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.171 23:23:29 -- target/discovery.sh@49 -- # check_bdevs= 00:10:09.171 23:23:29 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:09.171 23:23:29 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:09.171 23:23:29 -- target/discovery.sh@57 -- # nvmftestfini 00:10:09.171 23:23:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:09.171 23:23:29 -- nvmf/common.sh@116 -- # sync 00:10:09.171 23:23:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:09.171 23:23:29 -- nvmf/common.sh@119 -- # set +e 00:10:09.171 23:23:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:09.171 23:23:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:09.171 rmmod nvme_tcp 00:10:09.171 rmmod nvme_fabrics 00:10:09.171 rmmod nvme_keyring 00:10:09.171 23:23:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:09.171 23:23:30 -- nvmf/common.sh@123 -- # set -e 00:10:09.171 23:23:30 -- nvmf/common.sh@124 -- # return 0 00:10:09.171 23:23:30 -- nvmf/common.sh@477 -- # '[' -n 160175 ']' 00:10:09.171 23:23:30 -- nvmf/common.sh@478 -- # killprocess 160175 00:10:09.171 23:23:30 -- common/autotest_common.sh@926 -- # '[' -z 160175 ']' 00:10:09.171 23:23:30 -- common/autotest_common.sh@930 -- # kill -0 160175 00:10:09.171 23:23:30 -- common/autotest_common.sh@931 -- # uname 00:10:09.171 23:23:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:09.171 23:23:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 160175 00:10:09.171 23:23:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:09.171 23:23:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:09.171 23:23:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 160175' 00:10:09.171 killing process with pid 160175 00:10:09.171 23:23:30 -- common/autotest_common.sh@945 -- # kill 160175 00:10:09.171 [2024-07-11 23:23:30.060377] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:09.171 23:23:30 -- common/autotest_common.sh@950 -- # wait 160175 00:10:09.430 23:23:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:09.430 23:23:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:09.430 23:23:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:09.430 23:23:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.430 23:23:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:09.430 23:23:30 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.430 23:23:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.430 23:23:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.964 23:23:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:11.964 00:10:11.964 real 0m7.152s 00:10:11.964 user 0m9.372s 00:10:11.964 sys 0m2.467s 00:10:11.964 23:23:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.964 23:23:32 -- common/autotest_common.sh@10 -- # set +x 00:10:11.964 ************************************ 00:10:11.964 END TEST nvmf_discovery 00:10:11.964 ************************************ 00:10:11.964 23:23:32 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:11.964 23:23:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:11.964 23:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.964 23:23:32 -- common/autotest_common.sh@10 -- # set +x 00:10:11.964 ************************************ 00:10:11.964 START TEST nvmf_referrals 00:10:11.964 ************************************ 00:10:11.964 23:23:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:11.964 * Looking for test storage... 00:10:11.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.964 23:23:32 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.964 23:23:32 -- nvmf/common.sh@7 -- # uname -s 00:10:11.964 23:23:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.964 23:23:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.964 23:23:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.964 23:23:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.965 23:23:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.965 23:23:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.965 23:23:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.965 23:23:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.965 23:23:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.965 23:23:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.965 23:23:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.965 23:23:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.965 23:23:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.965 23:23:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.965 23:23:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.965 23:23:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.965 23:23:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.965 23:23:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.965 23:23:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.965 23:23:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.965 23:23:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.965 23:23:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.965 23:23:32 -- paths/export.sh@5 -- # export PATH 00:10:11.965 23:23:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.965 23:23:32 -- nvmf/common.sh@46 -- # : 0 00:10:11.965 23:23:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:11.965 23:23:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:11.965 23:23:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:11.965 23:23:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.965 23:23:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.965 23:23:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:11.965 23:23:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:11.965 23:23:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:11.965 23:23:32 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:11.965 23:23:32 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:11.965 23:23:32 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:11.965 23:23:32 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:11.965 23:23:32 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:11.965 23:23:32 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:11.965 23:23:32 -- target/referrals.sh@37 -- # nvmftestinit 00:10:11.965 23:23:32 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:10:11.965 23:23:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.965 23:23:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:11.965 23:23:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:11.965 23:23:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:11.965 23:23:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.965 23:23:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.965 23:23:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.965 23:23:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:11.965 23:23:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:11.965 23:23:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:11.965 23:23:32 -- common/autotest_common.sh@10 -- # set +x 00:10:14.500 23:23:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:14.500 23:23:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:14.500 23:23:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:14.500 23:23:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:14.500 23:23:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:14.500 23:23:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:14.500 23:23:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:14.500 23:23:35 -- nvmf/common.sh@294 -- # net_devs=() 00:10:14.500 23:23:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:10:14.500 23:23:35 -- nvmf/common.sh@295 -- # e810=() 00:10:14.500 23:23:35 -- nvmf/common.sh@295 -- # local -ga e810 00:10:14.500 23:23:35 -- nvmf/common.sh@296 -- # x722=() 00:10:14.500 23:23:35 -- nvmf/common.sh@296 -- # local -ga x722 00:10:14.500 23:23:35 -- nvmf/common.sh@297 -- # mlx=() 00:10:14.500 23:23:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:14.500 23:23:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.500 23:23:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:14.500 23:23:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:14.501 23:23:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:14.501 23:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:14.501 23:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:14.501 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:14.501 23:23:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:14.501 23:23:35 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:14.501 23:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:14.501 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:14.501 23:23:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:14.501 23:23:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:14.501 23:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.501 23:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:14.501 23:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.501 23:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:14.501 Found net devices under 0000:84:00.0: cvl_0_0 00:10:14.501 23:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.501 23:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:14.501 23:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.501 23:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:14.501 23:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.501 23:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:14.501 Found net devices under 0000:84:00.1: cvl_0_1 00:10:14.501 23:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.501 23:23:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:14.501 23:23:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:14.501 23:23:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:14.501 23:23:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.501 23:23:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.501 23:23:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.501 23:23:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:14.501 23:23:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.501 23:23:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.501 23:23:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:14.501 23:23:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.501 23:23:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.501 23:23:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:14.501 23:23:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:14.501 23:23:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.501 23:23:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
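nvmf_tcp_init's namespace plumbing is interleaved with other setup above and below; collected into one runnable sketch, with the interface names from this run's two E810 ports (cvl_0_0 becomes the target side, cvl_0_1 stays the initiator side):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator-to-target sanity check, as above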
00:10:14.501 23:23:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.501 23:23:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.501 23:23:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:14.501 23:23:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.501 23:23:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.501 23:23:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.501 23:23:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:14.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:10:14.501 00:10:14.501 --- 10.0.0.2 ping statistics --- 00:10:14.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.501 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:14.501 23:23:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:10:14.501 00:10:14.501 --- 10.0.0.1 ping statistics --- 00:10:14.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.501 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:10:14.501 23:23:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.501 23:23:35 -- nvmf/common.sh@410 -- # return 0 00:10:14.501 23:23:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:14.501 23:23:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.501 23:23:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:14.501 23:23:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.501 23:23:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:14.501 23:23:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:14.501 23:23:35 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:14.501 23:23:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:14.501 23:23:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:14.501 23:23:35 -- common/autotest_common.sh@10 -- # set +x 00:10:14.501 23:23:35 -- nvmf/common.sh@469 -- # nvmfpid=162443 00:10:14.501 23:23:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.501 23:23:35 -- nvmf/common.sh@470 -- # waitforlisten 162443 00:10:14.501 23:23:35 -- common/autotest_common.sh@819 -- # '[' -z 162443 ']' 00:10:14.501 23:23:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.501 23:23:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.501 23:23:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.501 23:23:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.501 23:23:35 -- common/autotest_common.sh@10 -- # set +x 00:10:14.501 [2024-07-11 23:23:35.437534] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
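The referral exercise that follows (target/referrals.sh) reduces to: listen for discovery on port 8009, advertise three referrals on 4430, then check that the target's RPC view and the initiator's discovery log agree. A condensed sketch of that flow, rpc.py path assumed:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ref in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a $ref -s 4430
done
$rpc nvmf_discovery_get_referrals | jq length        # expect 3
# Target-side view of the referral addresses:
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# Initiator-side view, read back from the discovery log on 8009:
nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
    -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort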
00:10:14.501 [2024-07-11 23:23:35.437627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.760 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.760 [2024-07-11 23:23:35.519760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.760 [2024-07-11 23:23:35.615555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:14.760 [2024-07-11 23:23:35.615735] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.760 [2024-07-11 23:23:35.615754] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.760 [2024-07-11 23:23:35.615769] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.760 [2024-07-11 23:23:35.615862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.760 [2024-07-11 23:23:35.615957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.760 [2024-07-11 23:23:35.616009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.760 [2024-07-11 23:23:35.616012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.693 23:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:15.693 23:23:36 -- common/autotest_common.sh@852 -- # return 0 00:10:15.693 23:23:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:15.693 23:23:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 23:23:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.693 23:23:36 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.693 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 [2024-07-11 23:23:36.548033] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.693 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.693 23:23:36 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:15.693 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 [2024-07-11 23:23:36.560275] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:15.693 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.693 23:23:36 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:15.693 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.693 23:23:36 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:15.693 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.693 23:23:36 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:10:15.693 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.693 23:23:36 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.693 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.693 23:23:36 -- target/referrals.sh@48 -- # jq length 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.693 23:23:36 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:15.693 23:23:36 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:15.693 23:23:36 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:15.693 23:23:36 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.693 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.693 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.693 23:23:36 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:15.693 23:23:36 -- target/referrals.sh@21 -- # sort 00:10:15.693 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.951 23:23:36 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:15.951 23:23:36 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:15.951 23:23:36 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:15.951 23:23:36 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:15.951 23:23:36 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:15.951 23:23:36 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:15.951 23:23:36 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:15.951 23:23:36 -- target/referrals.sh@26 -- # sort 00:10:15.951 23:23:36 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:15.951 23:23:36 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:15.951 23:23:36 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:15.951 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.951 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.951 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.951 23:23:36 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:15.951 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.951 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.951 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.951 23:23:36 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:15.951 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.951 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.951 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.951 23:23:36 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.951 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:15.951 23:23:36 -- 
target/referrals.sh@56 -- # jq length 00:10:15.951 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.951 23:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:15.951 23:23:36 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:15.951 23:23:36 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:15.951 23:23:36 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:15.951 23:23:36 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:15.951 23:23:36 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:15.951 23:23:36 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:15.951 23:23:36 -- target/referrals.sh@26 -- # sort 00:10:16.209 23:23:36 -- target/referrals.sh@26 -- # echo 00:10:16.209 23:23:36 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:16.209 23:23:36 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:16.209 23:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.209 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:10:16.209 23:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.209 23:23:37 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:16.209 23:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.209 23:23:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.209 23:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.209 23:23:37 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:16.209 23:23:37 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:16.209 23:23:37 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:16.209 23:23:37 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:16.209 23:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.209 23:23:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.209 23:23:37 -- target/referrals.sh@21 -- # sort 00:10:16.209 23:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.209 23:23:37 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:16.209 23:23:37 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:16.209 23:23:37 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:16.209 23:23:37 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:16.209 23:23:37 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:16.209 23:23:37 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.209 23:23:37 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:16.209 23:23:37 -- target/referrals.sh@26 -- # sort 00:10:16.466 23:23:37 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:16.466 23:23:37 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:16.466 23:23:37 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:16.466 23:23:37 -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:16.466 23:23:37 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:10:16.466 23:23:37 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.466 23:23:37 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:16.466 23:23:37 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:16.466 23:23:37 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:16.466 23:23:37 -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:16.466 23:23:37 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:16.466 23:23:37 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.466 23:23:37 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:16.724 23:23:37 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:16.724 23:23:37 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:16.724 23:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.724 23:23:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.724 23:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.724 23:23:37 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:16.724 23:23:37 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:16.724 23:23:37 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:16.724 23:23:37 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:16.724 23:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.724 23:23:37 -- target/referrals.sh@21 -- # sort 00:10:16.724 23:23:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.724 23:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.724 23:23:37 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:16.724 23:23:37 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:16.724 23:23:37 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:16.724 23:23:37 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:16.724 23:23:37 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:16.724 23:23:37 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.724 23:23:37 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:16.724 23:23:37 -- target/referrals.sh@26 -- # sort 00:10:16.981 23:23:37 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:16.982 23:23:37 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:16.982 23:23:37 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:16.982 23:23:37 -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:16.982 23:23:37 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:16.982 23:23:37 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.982 23:23:37 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:16.982 23:23:37 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:16.982 23:23:37 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:16.982 23:23:37 -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:16.982 23:23:37 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:16.982 23:23:37 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.982 23:23:37 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:17.239 23:23:37 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:17.239 23:23:37 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:17.239 23:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.239 23:23:37 -- common/autotest_common.sh@10 -- # set +x 00:10:17.239 23:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.239 23:23:37 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:17.239 23:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.239 23:23:37 -- target/referrals.sh@82 -- # jq length 00:10:17.239 23:23:37 -- common/autotest_common.sh@10 -- # set +x 00:10:17.239 23:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.239 23:23:38 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:17.239 23:23:38 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:17.239 23:23:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:17.239 23:23:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:17.239 23:23:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:17.239 23:23:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:17.239 23:23:38 -- target/referrals.sh@26 -- # sort 00:10:17.239 23:23:38 -- target/referrals.sh@26 -- # echo 00:10:17.239 23:23:38 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:17.239 23:23:38 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:17.239 23:23:38 -- target/referrals.sh@86 -- # nvmftestfini 00:10:17.239 23:23:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:17.239 23:23:38 -- nvmf/common.sh@116 -- # sync 00:10:17.239 23:23:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:17.239 23:23:38 -- nvmf/common.sh@119 -- # set +e 00:10:17.239 23:23:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:17.239 23:23:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:17.239 rmmod nvme_tcp 00:10:17.239 rmmod nvme_fabrics 00:10:17.239 rmmod nvme_keyring 00:10:17.497 23:23:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:17.497 23:23:38 -- nvmf/common.sh@123 -- # set -e 00:10:17.498 23:23:38 -- nvmf/common.sh@124 -- # return 0 00:10:17.498 23:23:38 -- nvmf/common.sh@477 
-- # '[' -n 162443 ']' 00:10:17.498 23:23:38 -- nvmf/common.sh@478 -- # killprocess 162443 00:10:17.498 23:23:38 -- common/autotest_common.sh@926 -- # '[' -z 162443 ']' 00:10:17.498 23:23:38 -- common/autotest_common.sh@930 -- # kill -0 162443 00:10:17.498 23:23:38 -- common/autotest_common.sh@931 -- # uname 00:10:17.498 23:23:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:17.498 23:23:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 162443 00:10:17.498 23:23:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:17.498 23:23:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:17.498 23:23:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 162443' 00:10:17.498 killing process with pid 162443 00:10:17.498 23:23:38 -- common/autotest_common.sh@945 -- # kill 162443 00:10:17.498 23:23:38 -- common/autotest_common.sh@950 -- # wait 162443 00:10:17.757 23:23:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:17.757 23:23:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:17.757 23:23:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:17.757 23:23:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.757 23:23:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:17.758 23:23:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.758 23:23:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.758 23:23:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.689 23:23:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:19.689 00:10:19.689 real 0m8.201s 00:10:19.689 user 0m13.354s 00:10:19.689 sys 0m2.935s 00:10:19.689 23:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.689 23:23:40 -- common/autotest_common.sh@10 -- # set +x 00:10:19.689 ************************************ 00:10:19.689 END TEST nvmf_referrals 00:10:19.689 ************************************ 00:10:19.689 23:23:40 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:19.689 23:23:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:19.689 23:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.689 23:23:40 -- common/autotest_common.sh@10 -- # set +x 00:10:19.689 ************************************ 00:10:19.689 START TEST nvmf_connect_disconnect 00:10:19.689 ************************************ 00:10:19.689 23:23:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:19.949 * Looking for test storage... 
00:10:19.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.949 23:23:40 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.949 23:23:40 -- nvmf/common.sh@7 -- # uname -s 00:10:19.949 23:23:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.949 23:23:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.949 23:23:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.949 23:23:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.949 23:23:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.949 23:23:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.949 23:23:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.949 23:23:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.949 23:23:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.949 23:23:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.949 23:23:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:19.949 23:23:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:19.949 23:23:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.949 23:23:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.949 23:23:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.949 23:23:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.949 23:23:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.949 23:23:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.949 23:23:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.949 23:23:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.949 23:23:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.949 23:23:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.949 23:23:40 -- paths/export.sh@5 -- # export PATH 00:10:19.949 23:23:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.949 23:23:40 -- nvmf/common.sh@46 -- # : 0 00:10:19.949 23:23:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:19.949 23:23:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:19.949 23:23:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:19.949 23:23:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.949 23:23:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.949 23:23:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:19.949 23:23:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:19.949 23:23:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:19.949 23:23:40 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.949 23:23:40 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.949 23:23:40 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:19.949 23:23:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:19.949 23:23:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.949 23:23:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:19.949 23:23:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:19.949 23:23:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:19.949 23:23:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.949 23:23:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.949 23:23:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.949 23:23:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:19.949 23:23:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:19.949 23:23:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:19.949 23:23:40 -- common/autotest_common.sh@10 -- # set +x 00:10:22.483 23:23:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:22.483 23:23:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:22.483 23:23:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:22.483 23:23:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:22.483 23:23:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:22.483 23:23:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:22.483 23:23:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:22.483 23:23:43 -- nvmf/common.sh@294 -- # net_devs=() 00:10:22.483 23:23:43 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:10:22.483 23:23:43 -- nvmf/common.sh@295 -- # e810=() 00:10:22.483 23:23:43 -- nvmf/common.sh@295 -- # local -ga e810 00:10:22.483 23:23:43 -- nvmf/common.sh@296 -- # x722=() 00:10:22.483 23:23:43 -- nvmf/common.sh@296 -- # local -ga x722 00:10:22.483 23:23:43 -- nvmf/common.sh@297 -- # mlx=() 00:10:22.483 23:23:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:22.483 23:23:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.483 23:23:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:22.483 23:23:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:22.483 23:23:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:22.483 23:23:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:22.483 23:23:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:22.483 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:22.483 23:23:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:22.483 23:23:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:22.483 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:22.483 23:23:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:22.483 23:23:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:22.483 23:23:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.483 23:23:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:22.483 23:23:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.483 23:23:43 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:84:00.0: cvl_0_0' 00:10:22.483 Found net devices under 0000:84:00.0: cvl_0_0 00:10:22.483 23:23:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.483 23:23:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:22.483 23:23:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.483 23:23:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:22.483 23:23:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.483 23:23:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:22.483 Found net devices under 0000:84:00.1: cvl_0_1 00:10:22.483 23:23:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.483 23:23:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:22.483 23:23:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:22.483 23:23:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:22.483 23:23:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:22.483 23:23:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.483 23:23:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.483 23:23:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.483 23:23:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:22.483 23:23:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.483 23:23:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.483 23:23:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:22.483 23:23:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.483 23:23:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.483 23:23:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:22.483 23:23:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:22.483 23:23:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.483 23:23:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.483 23:23:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.483 23:23:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.741 23:23:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:22.741 23:23:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.741 23:23:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.741 23:23:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.741 23:23:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:22.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:10:22.741 00:10:22.741 --- 10.0.0.2 ping statistics --- 00:10:22.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.741 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:22.741 23:23:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:22.741 00:10:22.741 --- 10.0.0.1 ping statistics --- 00:10:22.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.741 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:22.741 23:23:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.741 23:23:43 -- nvmf/common.sh@410 -- # return 0 00:10:22.741 23:23:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:22.741 23:23:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.741 23:23:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:22.741 23:23:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:22.741 23:23:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.741 23:23:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:22.741 23:23:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:22.741 23:23:43 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:22.741 23:23:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:22.741 23:23:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:22.741 23:23:43 -- common/autotest_common.sh@10 -- # set +x 00:10:22.741 23:23:43 -- nvmf/common.sh@469 -- # nvmfpid=164911 00:10:22.741 23:23:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.741 23:23:43 -- nvmf/common.sh@470 -- # waitforlisten 164911 00:10:22.741 23:23:43 -- common/autotest_common.sh@819 -- # '[' -z 164911 ']' 00:10:22.741 23:23:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.741 23:23:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:22.741 23:23:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.741 23:23:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:22.741 23:23:43 -- common/autotest_common.sh@10 -- # set +x 00:10:22.741 [2024-07-11 23:23:43.614722] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:22.741 [2024-07-11 23:23:43.614809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.741 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.999 [2024-07-11 23:23:43.693487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.999 [2024-07-11 23:23:43.788224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.999 [2024-07-11 23:23:43.788398] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.999 [2024-07-11 23:23:43.788418] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.999 [2024-07-11 23:23:43.788431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
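For reference, the nvmf_tcp_init sequence traced above amounts to the following standalone sketch; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply what this run used, not fixed values:

  # target port moves into its own namespace; initiator port stays in the root ns
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator (NVMF_INITIATOR_IP)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target (NVMF_FIRST_TARGET_IP)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The round-trip pings are the health check whose output appears in the trace; nvmf_tgt is then launched inside the namespace via "ip netns exec cvl_0_0_ns_spdk".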
00:10:22.999 [2024-07-11 23:23:43.788510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.999 [2024-07-11 23:23:43.788579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.999 [2024-07-11 23:23:43.788643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.999 [2024-07-11 23:23:43.788646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.931 23:23:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:23.931 23:23:44 -- common/autotest_common.sh@852 -- # return 0 00:10:23.931 23:23:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:23.931 23:23:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:23.931 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.931 23:23:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:23.931 23:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:23.931 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.931 [2024-07-11 23:23:44.745187] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.931 23:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:23.931 23:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:23.931 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.931 23:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.931 23:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:23.931 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.931 23:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.931 23:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:23.931 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.931 23:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.931 23:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:23.931 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.931 [2024-07-11 23:23:44.806256] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.931 23:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:10:23.931 23:23:44 -- target/connect_disconnect.sh@34 -- # set +x 00:10:26.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:10:35.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.109 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:29.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.228 23:27:36 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
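The provisioning calls and the 100-iteration loop behind the wall of disconnect notices above reduce to the sketch below. The rpc calls mirror the trace (rpc.py standing in for the rpc_cmd wrapper); the loop body is only an approximation of connect_disconnect.sh, which additionally waits for the controller's block device to appear before disconnecting:

  # target side: TCP transport, one 64 MiB / 512 B-block malloc bdev, one subsystem + listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                    # returns bdev name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: each pass prints one "NQN:... disconnected 1 controller(s)" line
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done

The "-i 8" (eight I/O queues) matches the NVME_CONNECT='nvme connect -i 8' override set at target/connect_disconnect.sh@29 above.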
00:14:16.228 23:27:36 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:16.228 23:27:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:16.228 23:27:36 -- nvmf/common.sh@116 -- # sync 00:14:16.228 23:27:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:16.228 23:27:36 -- nvmf/common.sh@119 -- # set +e 00:14:16.228 23:27:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:16.228 23:27:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:16.228 rmmod nvme_tcp 00:14:16.228 rmmod nvme_fabrics 00:14:16.228 rmmod nvme_keyring 00:14:16.228 23:27:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:16.228 23:27:37 -- nvmf/common.sh@123 -- # set -e 00:14:16.228 23:27:37 -- nvmf/common.sh@124 -- # return 0 00:14:16.228 23:27:37 -- nvmf/common.sh@477 -- # '[' -n 164911 ']' 00:14:16.228 23:27:37 -- nvmf/common.sh@478 -- # killprocess 164911 00:14:16.228 23:27:37 -- common/autotest_common.sh@926 -- # '[' -z 164911 ']' 00:14:16.228 23:27:37 -- common/autotest_common.sh@930 -- # kill -0 164911 00:14:16.228 23:27:37 -- common/autotest_common.sh@931 -- # uname 00:14:16.228 23:27:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:16.228 23:27:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 164911 00:14:16.228 23:27:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:16.228 23:27:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:16.228 23:27:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 164911' 00:14:16.228 killing process with pid 164911 00:14:16.228 23:27:37 -- common/autotest_common.sh@945 -- # kill 164911 00:14:16.228 23:27:37 -- common/autotest_common.sh@950 -- # wait 164911 00:14:16.488 23:27:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:16.488 23:27:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:16.488 23:27:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:16.488 23:27:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.488 23:27:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:16.488 23:27:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.488 23:27:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.488 23:27:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.030 23:27:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:19.030 00:14:19.030 real 3m58.778s 00:14:19.030 user 15m7.337s 00:14:19.031 sys 0m36.573s 00:14:19.031 23:27:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.031 23:27:39 -- common/autotest_common.sh@10 -- # set +x 00:14:19.031 ************************************ 00:14:19.031 END TEST nvmf_connect_disconnect 00:14:19.031 ************************************ 00:14:19.031 23:27:39 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:19.031 23:27:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:19.031 23:27:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.031 23:27:39 -- common/autotest_common.sh@10 -- # set +x 00:14:19.031 ************************************ 00:14:19.031 START TEST nvmf_multitarget 00:14:19.031 ************************************ 00:14:19.031 23:27:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:19.031 * Looking for test storage... 
00:14:19.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.031 23:27:39 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.031 23:27:39 -- nvmf/common.sh@7 -- # uname -s 00:14:19.031 23:27:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.031 23:27:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.031 23:27:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.031 23:27:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.031 23:27:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.031 23:27:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.031 23:27:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.031 23:27:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.031 23:27:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.031 23:27:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.031 23:27:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.031 23:27:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.031 23:27:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.031 23:27:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.031 23:27:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.031 23:27:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.031 23:27:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.031 23:27:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.031 23:27:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.031 23:27:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.031 23:27:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.031 23:27:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.031 23:27:39 -- paths/export.sh@5 -- # export PATH 00:14:19.031 23:27:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.031 23:27:39 -- nvmf/common.sh@46 -- # : 0 00:14:19.031 23:27:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:19.031 23:27:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:19.031 23:27:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:19.031 23:27:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.031 23:27:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.031 23:27:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:19.031 23:27:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:19.031 23:27:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:19.031 23:27:39 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:19.031 23:27:39 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:19.031 23:27:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:19.031 23:27:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.031 23:27:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:19.031 23:27:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:19.031 23:27:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:19.031 23:27:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.031 23:27:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.031 23:27:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.031 23:27:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:19.031 23:27:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:19.031 23:27:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:19.031 23:27:39 -- common/autotest_common.sh@10 -- # set +x 00:14:21.568 23:27:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.568 23:27:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.568 23:27:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.568 23:27:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.568 23:27:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.568 23:27:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.568 23:27:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.568 23:27:42 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.568 23:27:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.568 23:27:42 -- 
nvmf/common.sh@295 -- # e810=() 00:14:21.568 23:27:42 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.568 23:27:42 -- nvmf/common.sh@296 -- # x722=() 00:14:21.568 23:27:42 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.568 23:27:42 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.568 23:27:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.568 23:27:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.568 23:27:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.568 23:27:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:21.568 23:27:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:21.568 23:27:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:21.568 23:27:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:21.568 23:27:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.568 23:27:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.568 23:27:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:21.568 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:21.568 23:27:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.569 23:27:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:21.569 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:21.569 23:27:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.569 23:27:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.569 23:27:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.569 23:27:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.569 23:27:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.569 23:27:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:14:21.569 Found net devices under 0000:84:00.0: cvl_0_0 00:14:21.569 23:27:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.569 23:27:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.569 23:27:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.569 23:27:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.569 23:27:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.569 23:27:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:21.569 Found net devices under 0000:84:00.1: cvl_0_1 00:14:21.569 23:27:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.569 23:27:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.569 23:27:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.569 23:27:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:21.569 23:27:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.569 23:27:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.569 23:27:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.569 23:27:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:21.569 23:27:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.569 23:27:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.569 23:27:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:21.569 23:27:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.569 23:27:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.569 23:27:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:21.569 23:27:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:21.569 23:27:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.569 23:27:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.569 23:27:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.569 23:27:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.569 23:27:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:21.569 23:27:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.569 23:27:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.569 23:27:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.569 23:27:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:21.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:14:21.569 00:14:21.569 --- 10.0.0.2 ping statistics --- 00:14:21.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.569 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:14:21.569 23:27:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:14:21.569 00:14:21.569 --- 10.0.0.1 ping statistics --- 00:14:21.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.569 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:21.569 23:27:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.569 23:27:42 -- nvmf/common.sh@410 -- # return 0 00:14:21.569 23:27:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.569 23:27:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.569 23:27:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:21.569 23:27:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.569 23:27:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:21.569 23:27:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:21.569 23:27:42 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:21.569 23:27:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:21.569 23:27:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:21.569 23:27:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.569 23:27:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.569 23:27:42 -- nvmf/common.sh@469 -- # nvmfpid=197219 00:14:21.569 23:27:42 -- nvmf/common.sh@470 -- # waitforlisten 197219 00:14:21.569 23:27:42 -- common/autotest_common.sh@819 -- # '[' -z 197219 ']' 00:14:21.569 23:27:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.569 23:27:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.569 23:27:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.569 23:27:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.569 23:27:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.569 [2024-07-11 23:27:42.462312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:21.569 [2024-07-11 23:27:42.462423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.827 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.827 [2024-07-11 23:27:42.564682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.827 [2024-07-11 23:27:42.658152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:21.827 [2024-07-11 23:27:42.658317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.827 [2024-07-11 23:27:42.658337] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.827 [2024-07-11 23:27:42.658351] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:21.827 [2024-07-11 23:27:42.658409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.827 [2024-07-11 23:27:42.658457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.827 [2024-07-11 23:27:42.658523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.827 [2024-07-11 23:27:42.658526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.206 23:27:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:23.206 23:27:43 -- common/autotest_common.sh@852 -- # return 0 00:14:23.206 23:27:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:23.206 23:27:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:23.206 23:27:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.206 23:27:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.206 23:27:43 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:23.206 23:27:43 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:23.206 23:27:43 -- target/multitarget.sh@21 -- # jq length 00:14:23.206 23:27:43 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:23.206 23:27:43 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:23.465 "nvmf_tgt_1" 00:14:23.465 23:27:44 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:23.725 "nvmf_tgt_2" 00:14:23.725 23:27:44 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:23.725 23:27:44 -- target/multitarget.sh@28 -- # jq length 00:14:23.725 23:27:44 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:23.725 23:27:44 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:23.983 true 00:14:23.983 23:27:44 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:24.242 true 00:14:24.242 23:27:44 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:24.242 23:27:44 -- target/multitarget.sh@35 -- # jq length 00:14:24.242 23:27:45 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:24.242 23:27:45 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:24.242 23:27:45 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:24.242 23:27:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:24.242 23:27:45 -- nvmf/common.sh@116 -- # sync 00:14:24.242 23:27:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:24.242 23:27:45 -- nvmf/common.sh@119 -- # set +e 00:14:24.242 23:27:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:24.242 23:27:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:24.242 rmmod nvme_tcp 00:14:24.242 rmmod nvme_fabrics 00:14:24.242 rmmod nvme_keyring 00:14:24.242 23:27:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:24.242 23:27:45 -- nvmf/common.sh@123 -- # set -e 00:14:24.242 23:27:45 -- nvmf/common.sh@124 -- # return 0 
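Condensed, the multitarget_rpc.py exchange traced above is a create/verify/delete round trip against the default target (rpc below abbreviates the script's full path from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32         # prints "nvmf_tgt_1"
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32         # prints "nvmf_tgt_2"
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
  $rpc nvmf_delete_target -n nvmf_tgt_1               # prints "true"
  $rpc nvmf_delete_target -n nvmf_tgt_2               # prints "true"
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default

Each jq length check corresponds to one of the "'[' N '!=' N ']'" assertions in the trace.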
00:14:24.242 23:27:45 -- nvmf/common.sh@477 -- # '[' -n 197219 ']' 00:14:24.242 23:27:45 -- nvmf/common.sh@478 -- # killprocess 197219 00:14:24.242 23:27:45 -- common/autotest_common.sh@926 -- # '[' -z 197219 ']' 00:14:24.242 23:27:45 -- common/autotest_common.sh@930 -- # kill -0 197219 00:14:24.242 23:27:45 -- common/autotest_common.sh@931 -- # uname 00:14:24.242 23:27:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:24.242 23:27:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 197219 00:14:24.500 23:27:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:24.500 23:27:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:24.500 23:27:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 197219' 00:14:24.500 killing process with pid 197219 00:14:24.500 23:27:45 -- common/autotest_common.sh@945 -- # kill 197219 00:14:24.500 23:27:45 -- common/autotest_common.sh@950 -- # wait 197219 00:14:24.759 23:27:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:24.759 23:27:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:24.759 23:27:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:24.759 23:27:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.759 23:27:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:24.759 23:27:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.759 23:27:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.759 23:27:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.662 23:27:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:26.662 00:14:26.662 real 0m8.084s 00:14:26.662 user 0m13.701s 00:14:26.662 sys 0m2.763s 00:14:26.662 23:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.662 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:14:26.662 ************************************ 00:14:26.662 END TEST nvmf_multitarget 00:14:26.662 ************************************ 00:14:26.662 23:27:47 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:26.662 23:27:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:26.662 23:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.662 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:14:26.662 ************************************ 00:14:26.662 START TEST nvmf_rpc 00:14:26.662 ************************************ 00:14:26.662 23:27:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:26.662 * Looking for test storage... 
00:14:26.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.662 23:27:47 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.662 23:27:47 -- nvmf/common.sh@7 -- # uname -s 00:14:26.939 23:27:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.939 23:27:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.939 23:27:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.939 23:27:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.939 23:27:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.939 23:27:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.939 23:27:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.939 23:27:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.939 23:27:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.939 23:27:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.939 23:27:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:26.939 23:27:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:26.939 23:27:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.939 23:27:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.939 23:27:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.939 23:27:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.939 23:27:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.939 23:27:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.939 23:27:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.939 23:27:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.939 23:27:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.939 23:27:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.939 23:27:47 -- paths/export.sh@5 -- # export PATH 00:14:26.939 23:27:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.939 23:27:47 -- nvmf/common.sh@46 -- # : 0 00:14:26.939 23:27:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:26.939 23:27:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:26.939 23:27:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:26.939 23:27:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.939 23:27:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.939 23:27:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:26.939 23:27:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:26.939 23:27:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:26.939 23:27:47 -- target/rpc.sh@11 -- # loops=5 00:14:26.939 23:27:47 -- target/rpc.sh@23 -- # nvmftestinit 00:14:26.939 23:27:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:26.939 23:27:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.939 23:27:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:26.939 23:27:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:26.939 23:27:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:26.939 23:27:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.939 23:27:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.939 23:27:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.939 23:27:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:26.939 23:27:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:26.939 23:27:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:26.939 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:14:29.491 23:27:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:29.491 23:27:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:29.491 23:27:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:29.491 23:27:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:29.491 23:27:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:29.491 23:27:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:29.491 23:27:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:29.491 23:27:50 -- nvmf/common.sh@294 -- # net_devs=() 00:14:29.491 23:27:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:29.491 23:27:50 -- nvmf/common.sh@295 -- # e810=() 00:14:29.491 23:27:50 -- nvmf/common.sh@295 -- # local -ga e810 00:14:29.491 
23:27:50 -- nvmf/common.sh@296 -- # x722=() 00:14:29.491 23:27:50 -- nvmf/common.sh@296 -- # local -ga x722 00:14:29.491 23:27:50 -- nvmf/common.sh@297 -- # mlx=() 00:14:29.491 23:27:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:29.491 23:27:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.491 23:27:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:29.491 23:27:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:29.491 23:27:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:29.491 23:27:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.491 23:27:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:29.491 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:29.491 23:27:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.491 23:27:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:29.491 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:29.491 23:27:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:29.491 23:27:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.491 23:27:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.491 23:27:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.491 23:27:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.491 23:27:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:29.491 Found net devices under 0000:84:00.0: cvl_0_0 00:14:29.491 23:27:50 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:29.491 23:27:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.491 23:27:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.491 23:27:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.491 23:27:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.491 23:27:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:29.491 Found net devices under 0000:84:00.1: cvl_0_1 00:14:29.491 23:27:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.491 23:27:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:29.491 23:27:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:29.491 23:27:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:29.491 23:27:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:29.491 23:27:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.491 23:27:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.491 23:27:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.491 23:27:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:29.491 23:27:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.491 23:27:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.491 23:27:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:29.491 23:27:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.491 23:27:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.491 23:27:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:29.491 23:27:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:29.491 23:27:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.491 23:27:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.491 23:27:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.491 23:27:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.491 23:27:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:29.491 23:27:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.491 23:27:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.491 23:27:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.491 23:27:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:29.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:14:29.491 00:14:29.491 --- 10.0.0.2 ping statistics --- 00:14:29.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.492 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:14:29.492 23:27:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
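The device scan a few lines back builds vendor:device ID tables for Intel E810/X722 and Mellanox parts, then walks the PCI bus; on this rig both 0x8086:0x159b ports match the e810 list and their kernel netdevs (cvl_0_0, cvl_0_1) become the test interfaces. A hedged sketch of the same match done directly against sysfs (the E810 IDs are copied from the trace; the real script consults a prebuilt pci_bus_cache instead of rescanning):

# Sketch: find E810 NICs by vendor:device ID and list their netdevs (assumes sysfs).
intel=0x8086
e810_ids=(0x1592 0x159b)
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810_ids[@]}"; do
        if [[ $device == "$id" ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"
            ls "$dev/net" 2>/dev/null       # e.g. cvl_0_0 when the ice driver is bound
        fi
    done
done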
00:14:29.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:14:29.492 00:14:29.492 --- 10.0.0.1 ping statistics --- 00:14:29.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.492 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:14:29.492 23:27:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.492 23:27:50 -- nvmf/common.sh@410 -- # return 0 00:14:29.492 23:27:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:29.492 23:27:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.492 23:27:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:29.492 23:27:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:29.492 23:27:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.492 23:27:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:29.492 23:27:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:29.492 23:27:50 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:29.492 23:27:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:29.492 23:27:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:29.492 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:14:29.492 23:27:50 -- nvmf/common.sh@469 -- # nvmfpid=199613 00:14:29.492 23:27:50 -- nvmf/common.sh@470 -- # waitforlisten 199613 00:14:29.492 23:27:50 -- common/autotest_common.sh@819 -- # '[' -z 199613 ']' 00:14:29.492 23:27:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.492 23:27:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.492 23:27:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:29.492 23:27:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.492 23:27:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:29.492 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:14:29.492 [2024-07-11 23:27:50.315625] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:29.492 [2024-07-11 23:27:50.315723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.492 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.492 [2024-07-11 23:27:50.393786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.749 [2024-07-11 23:27:50.487856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:29.749 [2024-07-11 23:27:50.488016] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.749 [2024-07-11 23:27:50.488036] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.749 [2024-07-11 23:27:50.488050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
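With two ports of one NIC available, nvmf_tcp_init builds a loopback fabric as traced above: the target port moves into a network namespace and gets 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, TCP 4420 is opened in iptables, and both directions are ping-verified. A condensed sketch of that wiring (interface and namespace names taken from the trace):

# Loopback fabric between two ports of one physical NIC, per the trace above.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                  # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns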
00:14:29.749 [2024-07-11 23:27:50.489181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.749 [2024-07-11 23:27:50.489218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.749 [2024-07-11 23:27:50.489270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.749 [2024-07-11 23:27:50.489274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.682 23:27:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.682 23:27:51 -- common/autotest_common.sh@852 -- # return 0 00:14:30.682 23:27:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:30.682 23:27:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:30.682 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.682 23:27:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.682 23:27:51 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:30.682 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.682 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.682 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.682 23:27:51 -- target/rpc.sh@26 -- # stats='{ 00:14:30.682 "tick_rate": 2700000000, 00:14:30.682 "poll_groups": [ 00:14:30.682 { 00:14:30.682 "name": "nvmf_tgt_poll_group_0", 00:14:30.682 "admin_qpairs": 0, 00:14:30.682 "io_qpairs": 0, 00:14:30.682 "current_admin_qpairs": 0, 00:14:30.682 "current_io_qpairs": 0, 00:14:30.682 "pending_bdev_io": 0, 00:14:30.682 "completed_nvme_io": 0, 00:14:30.682 "transports": [] 00:14:30.682 }, 00:14:30.682 { 00:14:30.682 "name": "nvmf_tgt_poll_group_1", 00:14:30.682 "admin_qpairs": 0, 00:14:30.682 "io_qpairs": 0, 00:14:30.682 "current_admin_qpairs": 0, 00:14:30.682 "current_io_qpairs": 0, 00:14:30.682 "pending_bdev_io": 0, 00:14:30.682 "completed_nvme_io": 0, 00:14:30.682 "transports": [] 00:14:30.682 }, 00:14:30.682 { 00:14:30.682 "name": "nvmf_tgt_poll_group_2", 00:14:30.682 "admin_qpairs": 0, 00:14:30.682 "io_qpairs": 0, 00:14:30.682 "current_admin_qpairs": 0, 00:14:30.682 "current_io_qpairs": 0, 00:14:30.682 "pending_bdev_io": 0, 00:14:30.682 "completed_nvme_io": 0, 00:14:30.682 "transports": [] 00:14:30.682 }, 00:14:30.682 { 00:14:30.682 "name": "nvmf_tgt_poll_group_3", 00:14:30.682 "admin_qpairs": 0, 00:14:30.682 "io_qpairs": 0, 00:14:30.682 "current_admin_qpairs": 0, 00:14:30.682 "current_io_qpairs": 0, 00:14:30.682 "pending_bdev_io": 0, 00:14:30.682 "completed_nvme_io": 0, 00:14:30.682 "transports": [] 00:14:30.683 } 00:14:30.683 ] 00:14:30.683 }' 00:14:30.683 23:27:51 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:30.683 23:27:51 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:30.683 23:27:51 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:30.683 23:27:51 -- target/rpc.sh@15 -- # wc -l 00:14:30.683 23:27:51 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:30.683 23:27:51 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:30.683 23:27:51 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:30.683 23:27:51 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.683 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.683 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.683 [2024-07-11 23:27:51.516500] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.683 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.683 23:27:51 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:30.683 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.683 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.683 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.683 23:27:51 -- target/rpc.sh@33 -- # stats='{ 00:14:30.683 "tick_rate": 2700000000, 00:14:30.683 "poll_groups": [ 00:14:30.683 { 00:14:30.683 "name": "nvmf_tgt_poll_group_0", 00:14:30.683 "admin_qpairs": 0, 00:14:30.683 "io_qpairs": 0, 00:14:30.683 "current_admin_qpairs": 0, 00:14:30.683 "current_io_qpairs": 0, 00:14:30.683 "pending_bdev_io": 0, 00:14:30.683 "completed_nvme_io": 0, 00:14:30.683 "transports": [ 00:14:30.683 { 00:14:30.683 "trtype": "TCP" 00:14:30.683 } 00:14:30.683 ] 00:14:30.683 }, 00:14:30.683 { 00:14:30.683 "name": "nvmf_tgt_poll_group_1", 00:14:30.683 "admin_qpairs": 0, 00:14:30.683 "io_qpairs": 0, 00:14:30.683 "current_admin_qpairs": 0, 00:14:30.683 "current_io_qpairs": 0, 00:14:30.683 "pending_bdev_io": 0, 00:14:30.683 "completed_nvme_io": 0, 00:14:30.683 "transports": [ 00:14:30.683 { 00:14:30.683 "trtype": "TCP" 00:14:30.683 } 00:14:30.683 ] 00:14:30.683 }, 00:14:30.683 { 00:14:30.683 "name": "nvmf_tgt_poll_group_2", 00:14:30.683 "admin_qpairs": 0, 00:14:30.683 "io_qpairs": 0, 00:14:30.683 "current_admin_qpairs": 0, 00:14:30.683 "current_io_qpairs": 0, 00:14:30.683 "pending_bdev_io": 0, 00:14:30.683 "completed_nvme_io": 0, 00:14:30.683 "transports": [ 00:14:30.683 { 00:14:30.683 "trtype": "TCP" 00:14:30.683 } 00:14:30.683 ] 00:14:30.683 }, 00:14:30.683 { 00:14:30.683 "name": "nvmf_tgt_poll_group_3", 00:14:30.683 "admin_qpairs": 0, 00:14:30.683 "io_qpairs": 0, 00:14:30.683 "current_admin_qpairs": 0, 00:14:30.683 "current_io_qpairs": 0, 00:14:30.683 "pending_bdev_io": 0, 00:14:30.683 "completed_nvme_io": 0, 00:14:30.683 "transports": [ 00:14:30.683 { 00:14:30.683 "trtype": "TCP" 00:14:30.683 } 00:14:30.683 ] 00:14:30.683 } 00:14:30.683 ] 00:14:30.683 }' 00:14:30.683 23:27:51 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:30.683 23:27:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:30.683 23:27:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:30.683 23:27:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.683 23:27:51 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:30.683 23:27:51 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:30.683 23:27:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:30.683 23:27:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:30.683 23:27:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.940 23:27:51 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:30.940 23:27:51 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:30.940 23:27:51 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:30.940 23:27:51 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:30.940 23:27:51 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:30.940 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.940 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.940 Malloc1 00:14:30.940 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.940 23:27:51 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.940 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.940 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.940 
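The checks just traced validate the nvmf_get_stats JSON with two tiny jq wrappers whose expansions are visible above: jcount counts the values a filter yields, jsum adds them. A reconstruction from those expansions (reading from a captured $stats variable and the scripts/rpc.py path are assumptions about the exact plumbing):

# Hedged reconstruction of the jq helpers used by target/rpc.sh.
jcount() {
    jq "$1" <<<"$stats" | wc -l                        # how many values the filter yields
}
jsum() {
    jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'  # arithmetic sum of the values
}
stats=$(./scripts/rpc.py nvmf_get_stats)
(( $(jcount '.poll_groups[].name') == 4 ))             # one poll group per core of -m 0xF
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))          # nothing connected yet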
23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.940 23:27:51 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.940 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.940 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.940 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.940 23:27:51 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:30.940 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.940 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.940 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.940 23:27:51 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.940 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.940 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.940 [2024-07-11 23:27:51.755088] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.940 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.940 23:27:51 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:30.940 23:27:51 -- common/autotest_common.sh@640 -- # local es=0 00:14:30.940 23:27:51 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:30.940 23:27:51 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:30.940 23:27:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:30.940 23:27:51 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:30.940 23:27:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:30.941 23:27:51 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:30.941 23:27:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:30.941 23:27:51 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:30.941 23:27:51 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:30.941 23:27:51 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:30.941 [2024-07-11 23:27:51.777669] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:14:30.941 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:30.941 could not add new controller: failed to write to nvme-fabrics device 00:14:30.941 23:27:51 -- common/autotest_common.sh@643 -- # es=1 00:14:30.941 23:27:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:30.941 23:27:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:30.941 23:27:51 -- common/autotest_common.sh@667 -- # 
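The connect attempt above is deliberately expected to fail: the subsystem was created without the host NQN in its allowed list and allow_any_host was disabled, so the target rejects it ("does not allow host") and the kernel reports an I/O error on /dev/nvme-fabrics. The NOT wrapper inverts the exit status so the test only passes when the command fails. A simplified reconstruction (the real helper, visible in the trace, also distinguishes exit codes above 128, i.e. deaths by signal):

# Simplified NOT(): succeed only when the wrapped command fails (hedged sketch).
NOT() {
    if "$@"; then
        return 1        # unexpected success is a test failure
    fi
    return 0            # expected failure
}
NOT nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420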
(( !es == 0 )) 00:14:30.941 23:27:51 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:30.941 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.941 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.941 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.941 23:27:51 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:31.508 23:27:52 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:31.508 23:27:52 -- common/autotest_common.sh@1177 -- # local i=0 00:14:31.508 23:27:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.508 23:27:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:31.508 23:27:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:34.045 23:27:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:34.045 23:27:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:34.045 23:27:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.045 23:27:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:34.045 23:27:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.045 23:27:54 -- common/autotest_common.sh@1187 -- # return 0 00:14:34.045 23:27:54 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.045 23:27:54 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.045 23:27:54 -- common/autotest_common.sh@1198 -- # local i=0 00:14:34.045 23:27:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:34.045 23:27:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.045 23:27:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:34.045 23:27:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.045 23:27:54 -- common/autotest_common.sh@1210 -- # return 0 00:14:34.045 23:27:54 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:34.045 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.045 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:14:34.045 23:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.045 23:27:54 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.045 23:27:54 -- common/autotest_common.sh@640 -- # local es=0 00:14:34.045 23:27:54 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.045 23:27:54 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:34.045 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.045 23:27:54 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:34.045 23:27:54 -- 
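Once the host NQN is registered via nvmf_subsystem_add_host, the connect succeeds and waitforserial polls lsblk until a block device advertising the subsystem serial appears; waitforserial_disconnect does the inverse after `nvme disconnect`. Reconstructed from the expansions above (the budget of 16 passes with 2 s sleeps is read off the trace):

# Hedged reconstruction of waitforserial: poll until $want devices carry $serial.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 have
    while (( i++ <= 15 )); do
        sleep 2
        have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( have == want )) && return 0
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME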
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.045 23:27:54 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:34.045 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.045 23:27:54 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:34.045 23:27:54 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:34.045 23:27:54 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.045 [2024-07-11 23:27:54.581783] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:14:34.045 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:34.045 could not add new controller: failed to write to nvme-fabrics device 00:14:34.045 23:27:54 -- common/autotest_common.sh@643 -- # es=1 00:14:34.045 23:27:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:34.045 23:27:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:34.045 23:27:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:34.045 23:27:54 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:34.045 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.045 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:14:34.045 23:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.045 23:27:54 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.614 23:27:55 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:34.614 23:27:55 -- common/autotest_common.sh@1177 -- # local i=0 00:14:34.614 23:27:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.614 23:27:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:34.614 23:27:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:36.519 23:27:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:36.519 23:27:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:36.519 23:27:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.519 23:27:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:36.519 23:27:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.519 23:27:57 -- common/autotest_common.sh@1187 -- # return 0 00:14:36.519 23:27:57 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.519 23:27:57 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.519 23:27:57 -- common/autotest_common.sh@1198 -- # local i=0 00:14:36.519 23:27:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:36.519 23:27:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.519 23:27:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:36.519 23:27:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.519 23:27:57 -- common/autotest_common.sh@1210 -- # return 0 00:14:36.519 23:27:57 -- 
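This second rejection exercises the other access-control path: the host entry was removed again, so the connect fails until allow_any_host is re-enabled with `-e`, after which any initiator may connect without a per-host entry. The three RPC forms used in this trace, for reference (flags exactly as logged; the rpc.py path is an assumption):

# Access-control toggles exercised by target/rpc.sh (hedged sketch).
./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1          # deny by default
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"   # whitelist one host
./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1          # open to all hosts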
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.519 23:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.519 23:27:57 -- common/autotest_common.sh@10 -- # set +x 00:14:36.519 23:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.519 23:27:57 -- target/rpc.sh@81 -- # seq 1 5 00:14:36.519 23:27:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:36.519 23:27:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:36.519 23:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.519 23:27:57 -- common/autotest_common.sh@10 -- # set +x 00:14:36.519 23:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.519 23:27:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.519 23:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.519 23:27:57 -- common/autotest_common.sh@10 -- # set +x 00:14:36.519 [2024-07-11 23:27:57.436195] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.519 23:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.519 23:27:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:36.519 23:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.519 23:27:57 -- common/autotest_common.sh@10 -- # set +x 00:14:36.519 23:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.519 23:27:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:36.519 23:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.519 23:27:57 -- common/autotest_common.sh@10 -- # set +x 00:14:36.519 23:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.519 23:27:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:37.476 23:27:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:37.476 23:27:58 -- common/autotest_common.sh@1177 -- # local i=0 00:14:37.476 23:27:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.476 23:27:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:37.476 23:27:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:39.388 23:28:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:39.388 23:28:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:39.388 23:28:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.388 23:28:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:39.388 23:28:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.388 23:28:00 -- common/autotest_common.sh@1187 -- # return 0 00:14:39.388 23:28:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.388 23:28:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.388 23:28:00 -- common/autotest_common.sh@1198 -- # local i=0 00:14:39.388 23:28:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:39.388 23:28:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
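Everything from here to the next stats dump is five passes of one loop (loops=5 is set at target/rpc.sh@11 near the top of this trace): recreate the subsystem, expose Malloc1 as namespace 5, open the listener, connect, verify the serial, disconnect, and tear down. The loop body, reconstructed (rpc_cmd and waitforserial are the helpers sketched earlier):

# Reconstruction of the connect/disconnect loop traced below (loops=5).
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done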
00:14:39.388 23:28:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:39.388 23:28:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.388 23:28:00 -- common/autotest_common.sh@1210 -- # return 0 00:14:39.388 23:28:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.388 23:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.388 23:28:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 23:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.388 23:28:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.388 23:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.388 23:28:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 23:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.388 23:28:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:39.388 23:28:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:39.388 23:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.388 23:28:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 23:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.388 23:28:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.388 23:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.388 23:28:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 [2024-07-11 23:28:00.187570] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.388 23:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.388 23:28:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:39.388 23:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.388 23:28:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 23:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.388 23:28:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:39.388 23:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.388 23:28:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 23:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.388 23:28:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:39.970 23:28:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.970 23:28:00 -- common/autotest_common.sh@1177 -- # local i=0 00:14:39.970 23:28:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.970 23:28:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:39.970 23:28:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:42.503 23:28:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:42.503 23:28:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:42.503 23:28:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.503 23:28:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:42.503 23:28:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.503 23:28:02 -- 
common/autotest_common.sh@1187 -- # return 0 00:14:42.503 23:28:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.503 23:28:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:42.503 23:28:02 -- common/autotest_common.sh@1198 -- # local i=0 00:14:42.503 23:28:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:42.503 23:28:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.503 23:28:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:42.503 23:28:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.503 23:28:02 -- common/autotest_common.sh@1210 -- # return 0 00:14:42.503 23:28:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:42.503 23:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.503 23:28:02 -- common/autotest_common.sh@10 -- # set +x 00:14:42.503 23:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.503 23:28:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.503 23:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.503 23:28:02 -- common/autotest_common.sh@10 -- # set +x 00:14:42.503 23:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.503 23:28:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:42.503 23:28:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:42.503 23:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.503 23:28:02 -- common/autotest_common.sh@10 -- # set +x 00:14:42.503 23:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.503 23:28:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.503 23:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.503 23:28:02 -- common/autotest_common.sh@10 -- # set +x 00:14:42.503 [2024-07-11 23:28:02.986717] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.503 23:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.503 23:28:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:42.503 23:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.503 23:28:02 -- common/autotest_common.sh@10 -- # set +x 00:14:42.503 23:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.503 23:28:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:42.503 23:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.503 23:28:02 -- common/autotest_common.sh@10 -- # set +x 00:14:42.503 23:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.503 23:28:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.761 23:28:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.761 23:28:03 -- common/autotest_common.sh@1177 -- # local i=0 00:14:42.761 23:28:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.761 23:28:03 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:14:42.761 23:28:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:45.290 23:28:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:45.290 23:28:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:45.290 23:28:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.290 23:28:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:45.290 23:28:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.290 23:28:05 -- common/autotest_common.sh@1187 -- # return 0 00:14:45.290 23:28:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.290 23:28:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.290 23:28:05 -- common/autotest_common.sh@1198 -- # local i=0 00:14:45.290 23:28:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:45.290 23:28:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.290 23:28:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:45.290 23:28:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.290 23:28:05 -- common/autotest_common.sh@1210 -- # return 0 00:14:45.290 23:28:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:45.290 23:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.290 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.290 23:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.290 23:28:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.291 23:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.291 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.291 23:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.291 23:28:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:45.291 23:28:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.291 23:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.291 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.291 23:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.291 23:28:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.291 23:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.291 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.291 [2024-07-11 23:28:05.809531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.291 23:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.291 23:28:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:45.291 23:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.291 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.291 23:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.291 23:28:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.291 23:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.291 23:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:45.291 23:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.291 
23:28:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:45.548 23:28:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.548 23:28:06 -- common/autotest_common.sh@1177 -- # local i=0 00:14:45.548 23:28:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.548 23:28:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:45.548 23:28:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:48.079 23:28:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:48.079 23:28:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:48.079 23:28:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.079 23:28:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:48.079 23:28:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.079 23:28:08 -- common/autotest_common.sh@1187 -- # return 0 00:14:48.079 23:28:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.079 23:28:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.079 23:28:08 -- common/autotest_common.sh@1198 -- # local i=0 00:14:48.079 23:28:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:48.079 23:28:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.079 23:28:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:48.079 23:28:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.079 23:28:08 -- common/autotest_common.sh@1210 -- # return 0 00:14:48.079 23:28:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:48.079 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.079 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:14:48.079 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.079 23:28:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.079 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.079 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:14:48.079 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.079 23:28:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:48.079 23:28:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.079 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.079 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:14:48.079 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.079 23:28:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.079 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.079 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:14:48.079 [2024-07-11 23:28:08.601863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.079 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.079 23:28:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:48.079 
23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.079 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:14:48.079 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.079 23:28:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.079 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.079 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:14:48.079 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.079 23:28:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.645 23:28:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:48.645 23:28:09 -- common/autotest_common.sh@1177 -- # local i=0 00:14:48.645 23:28:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.645 23:28:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:48.645 23:28:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:50.546 23:28:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:50.546 23:28:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:50.546 23:28:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.546 23:28:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:50.546 23:28:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.546 23:28:11 -- common/autotest_common.sh@1187 -- # return 0 00:14:50.546 23:28:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.546 23:28:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.546 23:28:11 -- common/autotest_common.sh@1198 -- # local i=0 00:14:50.546 23:28:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:50.546 23:28:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.546 23:28:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:50.546 23:28:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.546 23:28:11 -- common/autotest_common.sh@1210 -- # return 0 00:14:50.546 23:28:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@99 -- # seq 1 5 00:14:50.546 23:28:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:50.546 23:28:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 [2024-07-11 23:28:11.435173] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:50.546 23:28:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 [2024-07-11 23:28:11.483235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.546 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.546 23:28:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.546 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.546 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:50.807 23:28:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 [2024-07-11 23:28:11.531398] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:50.807 23:28:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 [2024-07-11 23:28:11.579564] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 
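This second loop never connects an initiator; it is pure RPC churn, one pass per target/rpc.sh@99 iteration: create the subsystem, add a listener, attach Malloc1 (nsid auto-assigned to 1 this time), enable allow_any_host, then immediately remove the namespace and delete the subsystem. The fifth pass completes just below. Reconstructed loop body:

# Reconstruction of the create/delete churn loop (no host ever connects).
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # nsid defaults to 1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done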
23:28:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:50.807 23:28:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 [2024-07-11 23:28:11.627721] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:14:50.807 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.807 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.807 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.807 23:28:11 -- target/rpc.sh@110 -- # stats='{ 00:14:50.807 "tick_rate": 2700000000, 00:14:50.807 "poll_groups": [ 00:14:50.807 { 00:14:50.807 "name": "nvmf_tgt_poll_group_0", 00:14:50.807 "admin_qpairs": 2, 00:14:50.807 "io_qpairs": 84, 00:14:50.807 "current_admin_qpairs": 0, 00:14:50.807 "current_io_qpairs": 0, 00:14:50.807 "pending_bdev_io": 0, 00:14:50.807 "completed_nvme_io": 319, 00:14:50.807 "transports": [ 00:14:50.807 { 00:14:50.807 "trtype": "TCP" 00:14:50.807 } 00:14:50.807 ] 00:14:50.807 }, 00:14:50.807 { 00:14:50.807 "name": "nvmf_tgt_poll_group_1", 00:14:50.807 "admin_qpairs": 2, 00:14:50.807 "io_qpairs": 84, 00:14:50.807 "current_admin_qpairs": 0, 00:14:50.807 "current_io_qpairs": 0, 00:14:50.807 "pending_bdev_io": 0, 00:14:50.808 "completed_nvme_io": 187, 00:14:50.808 "transports": [ 00:14:50.808 { 00:14:50.808 "trtype": "TCP" 00:14:50.808 } 00:14:50.808 ] 00:14:50.808 }, 00:14:50.808 { 00:14:50.808 "name": "nvmf_tgt_poll_group_2", 00:14:50.808 "admin_qpairs": 1, 00:14:50.808 "io_qpairs": 84, 00:14:50.808 "current_admin_qpairs": 0, 00:14:50.808 "current_io_qpairs": 0, 00:14:50.808 "pending_bdev_io": 0, 00:14:50.808 "completed_nvme_io": 85, 00:14:50.808 "transports": [ 00:14:50.808 { 00:14:50.808 "trtype": "TCP" 00:14:50.808 } 00:14:50.808 ] 00:14:50.808 }, 00:14:50.808 { 00:14:50.808 "name": "nvmf_tgt_poll_group_3", 00:14:50.808 "admin_qpairs": 2, 00:14:50.808 "io_qpairs": 84, 00:14:50.808 "current_admin_qpairs": 0, 00:14:50.808 "current_io_qpairs": 0, 00:14:50.808 "pending_bdev_io": 0, 00:14:50.808 "completed_nvme_io": 95, 00:14:50.808 "transports": [ 00:14:50.808 { 00:14:50.808 "trtype": "TCP" 00:14:50.808 } 00:14:50.808 ] 00:14:50.808 } 00:14:50.808 ] 00:14:50.808 }' 00:14:50.808 23:28:11 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:50.808 23:28:11 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:50.808 23:28:11 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:50.808 23:28:11 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:51.068 23:28:11 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:51.068 23:28:11 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:51.068 23:28:11 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:51.068 23:28:11 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:51.068 23:28:11 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:51.068 23:28:11 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:51.068 23:28:11 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:51.068 23:28:11 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:51.068 23:28:11 -- target/rpc.sh@123 -- # nvmftestfini 00:14:51.068 23:28:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.068 23:28:11 -- nvmf/common.sh@116 -- # sync 00:14:51.068 23:28:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.068 23:28:11 -- nvmf/common.sh@119 -- # set +e 00:14:51.068 23:28:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.068 23:28:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.068 rmmod nvme_tcp 00:14:51.068 rmmod nvme_fabrics 00:14:51.068 rmmod nvme_keyring 00:14:51.068 23:28:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.068 23:28:11 -- nvmf/common.sh@123 -- # set -e 00:14:51.068 23:28:11 -- nvmf/common.sh@124 
-- # return 0 00:14:51.068 23:28:11 -- nvmf/common.sh@477 -- # '[' -n 199613 ']' 00:14:51.068 23:28:11 -- nvmf/common.sh@478 -- # killprocess 199613 00:14:51.068 23:28:11 -- common/autotest_common.sh@926 -- # '[' -z 199613 ']' 00:14:51.068 23:28:11 -- common/autotest_common.sh@930 -- # kill -0 199613 00:14:51.068 23:28:11 -- common/autotest_common.sh@931 -- # uname 00:14:51.068 23:28:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:51.068 23:28:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 199613 00:14:51.068 23:28:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:51.068 23:28:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:51.068 23:28:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 199613' 00:14:51.068 killing process with pid 199613 00:14:51.068 23:28:11 -- common/autotest_common.sh@945 -- # kill 199613 00:14:51.068 23:28:11 -- common/autotest_common.sh@950 -- # wait 199613 00:14:51.328 23:28:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.328 23:28:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.329 23:28:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.329 23:28:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.329 23:28:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.329 23:28:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.329 23:28:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.329 23:28:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.907 23:28:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:53.907 00:14:53.907 real 0m26.747s 00:14:53.907 user 1m26.641s 00:14:53.907 sys 0m4.724s 00:14:53.907 23:28:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.907 23:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:53.907 ************************************ 00:14:53.907 END TEST nvmf_rpc 00:14:53.907 ************************************ 00:14:53.907 23:28:14 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:53.907 23:28:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:53.907 23:28:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:53.907 23:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:53.907 ************************************ 00:14:53.907 START TEST nvmf_invalid 00:14:53.907 ************************************ 00:14:53.907 23:28:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:53.907 * Looking for test storage... 
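In the teardown that just scrolled past, nvmftestfini unloads the nvme modules and then stops the target through killprocess, which refuses to signal a pid it cannot positively identify (the ps comm probe returning reactor_0 in this run). A plausible reconstruction of that guard, assuming the helper is only ever handed children of the test shell; the real common.sh additionally special-cases sudo-wrapped processes rather than simply bailing:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1        # probe only: is the pid alive and signalable?
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          # refuse a pid whose comm name we cannot trust, e.g. a sudo wrapper
          [[ $name == sudo ]] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"        # wait only works for children of this shell
  }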
00:14:53.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.908 23:28:14 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.908 23:28:14 -- nvmf/common.sh@7 -- # uname -s 00:14:53.908 23:28:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.908 23:28:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.908 23:28:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.908 23:28:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.908 23:28:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.908 23:28:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.908 23:28:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.908 23:28:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.908 23:28:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.908 23:28:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.908 23:28:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:53.908 23:28:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:53.908 23:28:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.908 23:28:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.908 23:28:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.908 23:28:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.908 23:28:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.908 23:28:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.908 23:28:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.908 23:28:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.908 23:28:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.908 23:28:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.908 23:28:14 -- paths/export.sh@5 -- # export PATH 00:14:53.908 23:28:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.908 23:28:14 -- nvmf/common.sh@46 -- # : 0 00:14:53.908 23:28:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.908 23:28:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.908 23:28:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.908 23:28:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.908 23:28:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.908 23:28:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:53.908 23:28:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.908 23:28:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.908 23:28:14 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:53.908 23:28:14 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.908 23:28:14 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:53.908 23:28:14 -- target/invalid.sh@14 -- # target=foobar 00:14:53.908 23:28:14 -- target/invalid.sh@16 -- # RANDOM=0 00:14:53.908 23:28:14 -- target/invalid.sh@34 -- # nvmftestinit 00:14:53.908 23:28:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:53.908 23:28:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.908 23:28:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.908 23:28:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:53.908 23:28:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.908 23:28:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.908 23:28:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.908 23:28:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.908 23:28:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:53.908 23:28:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:53.908 23:28:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:53.908 23:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:56.441 23:28:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:56.441 23:28:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:56.441 23:28:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:56.441 23:28:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:56.441 23:28:16 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:56.441 23:28:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:56.441 23:28:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:56.441 23:28:16 -- nvmf/common.sh@294 -- # net_devs=() 00:14:56.441 23:28:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:56.441 23:28:16 -- nvmf/common.sh@295 -- # e810=() 00:14:56.441 23:28:16 -- nvmf/common.sh@295 -- # local -ga e810 00:14:56.441 23:28:16 -- nvmf/common.sh@296 -- # x722=() 00:14:56.441 23:28:16 -- nvmf/common.sh@296 -- # local -ga x722 00:14:56.441 23:28:16 -- nvmf/common.sh@297 -- # mlx=() 00:14:56.441 23:28:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:56.441 23:28:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.441 23:28:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:56.441 23:28:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:56.441 23:28:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:56.441 23:28:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:56.441 23:28:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:56.441 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:56.441 23:28:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:56.441 23:28:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:56.441 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:56.441 23:28:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:56.441 23:28:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:56.441 
23:28:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.441 23:28:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:56.441 23:28:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.441 23:28:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:56.441 Found net devices under 0000:84:00.0: cvl_0_0 00:14:56.441 23:28:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.441 23:28:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:56.441 23:28:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.441 23:28:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:56.441 23:28:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.441 23:28:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:56.441 Found net devices under 0000:84:00.1: cvl_0_1 00:14:56.441 23:28:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.441 23:28:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:56.441 23:28:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:56.441 23:28:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:56.441 23:28:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:56.441 23:28:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.441 23:28:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.441 23:28:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.441 23:28:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:56.441 23:28:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.441 23:28:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.441 23:28:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:56.441 23:28:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.441 23:28:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.441 23:28:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:56.441 23:28:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:56.441 23:28:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.441 23:28:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.441 23:28:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.441 23:28:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.441 23:28:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:56.441 23:28:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.441 23:28:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.441 23:28:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.441 23:28:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:56.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:14:56.441 00:14:56.441 --- 10.0.0.2 ping statistics --- 00:14:56.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.441 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:56.441 23:28:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:14:56.441 00:14:56.441 --- 10.0.0.1 ping statistics --- 00:14:56.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.441 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:14:56.441 23:28:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.441 23:28:17 -- nvmf/common.sh@410 -- # return 0 00:14:56.441 23:28:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.441 23:28:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.441 23:28:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.441 23:28:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.441 23:28:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.441 23:28:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.441 23:28:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.441 23:28:17 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:56.441 23:28:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:56.441 23:28:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:56.441 23:28:17 -- common/autotest_common.sh@10 -- # set +x 00:14:56.441 23:28:17 -- nvmf/common.sh@469 -- # nvmfpid=204320 00:14:56.441 23:28:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.441 23:28:17 -- nvmf/common.sh@470 -- # waitforlisten 204320 00:14:56.441 23:28:17 -- common/autotest_common.sh@819 -- # '[' -z 204320 ']' 00:14:56.441 23:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.441 23:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.441 23:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.441 23:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.441 23:28:17 -- common/autotest_common.sh@10 -- # set +x 00:14:56.441 [2024-07-11 23:28:17.127584] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:56.441 [2024-07-11 23:28:17.127768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.441 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.441 [2024-07-11 23:28:17.237939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.441 [2024-07-11 23:28:17.334197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.441 [2024-07-11 23:28:17.334355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.441 [2024-07-11 23:28:17.334374] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.441 [2024-07-11 23:28:17.334389] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
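Everything from ip netns add through the two pings above is nvmf_tcp_init building the point-to-point rig this target listens on: the cvl_0_0 port is moved into its own network namespace to play the target, while cvl_0_1 stays in the root namespace as the initiator, so both ends of the TCP connection live on one host. A condensed sketch of that sequence using the interface names and addresses from the trace (run as root; error handling omitted):

  ns=cvl_0_0_ns_spdk
  tgt_if=cvl_0_0; ini_if=cvl_0_1

  ip -4 addr flush "$tgt_if"; ip -4 addr flush "$ini_if"
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$ini_if"        # initiator side stays in the root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # root namespace -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1       # target namespace -> initiator

The ACCEPT rule is inserted at position 1 of the INPUT chain, presumably so a restrictive default policy on the build host cannot swallow traffic to port 4420 before the test starts.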
00:14:56.441 [2024-07-11 23:28:17.334447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.441 [2024-07-11 23:28:17.334505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.441 [2024-07-11 23:28:17.334573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.441 [2024-07-11 23:28:17.334570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.817 23:28:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.817 23:28:18 -- common/autotest_common.sh@852 -- # return 0 00:14:57.817 23:28:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:57.817 23:28:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:57.817 23:28:18 -- common/autotest_common.sh@10 -- # set +x 00:14:57.817 23:28:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.817 23:28:18 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:57.817 23:28:18 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8592 00:14:57.817 [2024-07-11 23:28:18.739644] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:57.817 23:28:18 -- target/invalid.sh@40 -- # out='request: 00:14:57.817 { 00:14:57.817 "nqn": "nqn.2016-06.io.spdk:cnode8592", 00:14:57.817 "tgt_name": "foobar", 00:14:57.817 "method": "nvmf_create_subsystem", 00:14:57.817 "req_id": 1 00:14:57.817 } 00:14:57.817 Got JSON-RPC error response 00:14:57.817 response: 00:14:57.817 { 00:14:57.817 "code": -32603, 00:14:57.817 "message": "Unable to find target foobar" 00:14:57.817 }' 00:14:57.817 23:28:18 -- target/invalid.sh@41 -- # [[ request: 00:14:57.817 { 00:14:57.817 "nqn": "nqn.2016-06.io.spdk:cnode8592", 00:14:57.817 "tgt_name": "foobar", 00:14:57.817 "method": "nvmf_create_subsystem", 00:14:57.817 "req_id": 1 00:14:57.817 } 00:14:57.817 Got JSON-RPC error response 00:14:57.817 response: 00:14:57.817 { 00:14:57.817 "code": -32603, 00:14:57.817 "message": "Unable to find target foobar" 00:14:57.817 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:57.817 23:28:18 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:57.817 23:28:18 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6453 00:14:58.381 [2024-07-11 23:28:19.072764] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6453: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:58.381 23:28:19 -- target/invalid.sh@45 -- # out='request: 00:14:58.381 { 00:14:58.382 "nqn": "nqn.2016-06.io.spdk:cnode6453", 00:14:58.382 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:58.382 "method": "nvmf_create_subsystem", 00:14:58.382 "req_id": 1 00:14:58.382 } 00:14:58.382 Got JSON-RPC error response 00:14:58.382 response: 00:14:58.382 { 00:14:58.382 "code": -32602, 00:14:58.382 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:58.382 }' 00:14:58.382 23:28:19 -- target/invalid.sh@46 -- # [[ request: 00:14:58.382 { 00:14:58.382 "nqn": "nqn.2016-06.io.spdk:cnode6453", 00:14:58.382 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:58.382 "method": "nvmf_create_subsystem", 00:14:58.382 "req_id": 1 00:14:58.382 } 00:14:58.382 Got JSON-RPC error response 00:14:58.382 response: 00:14:58.382 { 00:14:58.382 
"code": -32602, 00:14:58.382 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:58.382 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:58.382 23:28:19 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:58.382 23:28:19 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28118 00:14:58.639 [2024-07-11 23:28:19.417872] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28118: invalid model number 'SPDK_Controller' 00:14:58.639 23:28:19 -- target/invalid.sh@50 -- # out='request: 00:14:58.639 { 00:14:58.639 "nqn": "nqn.2016-06.io.spdk:cnode28118", 00:14:58.639 "model_number": "SPDK_Controller\u001f", 00:14:58.639 "method": "nvmf_create_subsystem", 00:14:58.639 "req_id": 1 00:14:58.639 } 00:14:58.639 Got JSON-RPC error response 00:14:58.639 response: 00:14:58.639 { 00:14:58.639 "code": -32602, 00:14:58.639 "message": "Invalid MN SPDK_Controller\u001f" 00:14:58.639 }' 00:14:58.639 23:28:19 -- target/invalid.sh@51 -- # [[ request: 00:14:58.639 { 00:14:58.639 "nqn": "nqn.2016-06.io.spdk:cnode28118", 00:14:58.639 "model_number": "SPDK_Controller\u001f", 00:14:58.639 "method": "nvmf_create_subsystem", 00:14:58.639 "req_id": 1 00:14:58.639 } 00:14:58.639 Got JSON-RPC error response 00:14:58.639 response: 00:14:58.639 { 00:14:58.639 "code": -32602, 00:14:58.639 "message": "Invalid MN SPDK_Controller\u001f" 00:14:58.639 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:58.639 23:28:19 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:58.639 23:28:19 -- target/invalid.sh@19 -- # local length=21 ll 00:14:58.640 23:28:19 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:58.640 23:28:19 -- target/invalid.sh@21 -- # local chars 00:14:58.640 23:28:19 -- target/invalid.sh@22 -- # local string 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 91 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+='[' 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 88 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=X 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 110 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=n 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 114 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # 
echo -e '\x72' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=r 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 45 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=- 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 90 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=Z 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 81 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=Q 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 49 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=1 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 111 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=o 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 113 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=q 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 60 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+='<' 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 54 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=6 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 51 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=3 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 87 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=W 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 51 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e 
'\x33' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=3 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 112 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=p 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 42 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+='*' 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 72 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=H 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 48 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=0 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 121 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+=y 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # printf %x 42 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:58.640 23:28:19 -- target/invalid.sh@25 -- # string+='*' 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.640 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.640 23:28:19 -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:14:58.640 23:28:19 -- target/invalid.sh@31 -- # echo '[Xnr-ZQ1oq<63W3p*H0y*' 00:14:58.640 23:28:19 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '[Xnr-ZQ1oq<63W3p*H0y*' nqn.2016-06.io.spdk:cnode25703 00:14:58.899 [2024-07-11 23:28:19.791121] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25703: invalid serial number '[Xnr-ZQ1oq<63W3p*H0y*' 00:14:58.899 23:28:19 -- target/invalid.sh@54 -- # out='request: 00:14:58.899 { 00:14:58.899 "nqn": "nqn.2016-06.io.spdk:cnode25703", 00:14:58.899 "serial_number": "[Xnr-ZQ1oq<63W3p*H0y*", 00:14:58.899 "method": "nvmf_create_subsystem", 00:14:58.899 "req_id": 1 00:14:58.899 } 00:14:58.899 Got JSON-RPC error response 00:14:58.899 response: 00:14:58.899 { 00:14:58.899 "code": -32602, 00:14:58.899 "message": "Invalid SN [Xnr-ZQ1oq<63W3p*H0y*" 00:14:58.899 }' 00:14:58.899 23:28:19 -- target/invalid.sh@55 -- # [[ request: 00:14:58.899 { 00:14:58.899 "nqn": "nqn.2016-06.io.spdk:cnode25703", 00:14:58.899 "serial_number": "[Xnr-ZQ1oq<63W3p*H0y*", 00:14:58.899 "method": "nvmf_create_subsystem", 00:14:58.899 "req_id": 1 00:14:58.899 } 00:14:58.899 Got JSON-RPC error response 00:14:58.899 response: 00:14:58.899 { 00:14:58.899 "code": -32602, 00:14:58.899 "message": "Invalid SN 
[Xnr-ZQ1oq<63W3p*H0y*" 00:14:58.899 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:58.899 23:28:19 -- target/invalid.sh@58 -- # gen_random_s 41 00:14:58.899 23:28:19 -- target/invalid.sh@19 -- # local length=41 ll 00:14:58.899 23:28:19 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:58.899 23:28:19 -- target/invalid.sh@21 -- # local chars 00:14:58.899 23:28:19 -- target/invalid.sh@22 -- # local string 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 84 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=T 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 68 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=D 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 48 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=0 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 34 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+='"' 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 41 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=')' 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 82 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=R 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 65 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=A 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 117 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=u 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 65 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=A 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # printf %x 118 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:58.899 23:28:19 -- target/invalid.sh@25 -- # string+=v 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.899 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 110 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=n 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 109 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=m 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 53 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=5 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 120 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=x 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 63 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+='?' 
00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 114 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=r 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 78 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=N 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 98 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=b 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 71 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=G 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 102 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=f 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 117 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=u 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 94 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+='^' 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 43 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=+ 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 94 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+='^' 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 109 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=m 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 75 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=K 
00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 32 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=' ' 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 109 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=m 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 40 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+='(' 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 58 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=: 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 100 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=d 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 89 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=Y 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 70 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=F 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 35 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+='#' 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 80 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=P 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 51 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=3 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 34 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+='"' 
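The long printf %x / echo -e run through this stretch (continuing below) is gen_random_s assembling a 41-character model number one character at a time: pick a code point from the 32-127 table, print it as hex, expand it back into a character, append. RANDOM=0 earlier in invalid.sh makes the sequence reproducible across runs. A hedged reconstruction of the helper; the real invalid.sh source may differ in detail, e.g. the trace also shows it checking that the result does not begin with a dash before handing it to rpc.py:

  # build a random string of printable ASCII, gen_random_s-style
  gen_random_s() {
      local length=$1 ll string= hex
      local chars
      chars=($(seq 32 127))                 # the same code-point table as the trace
      for (( ll = 0; ll < length; ll++ )); do
          # pick a code point, render it as hex, expand it with echo -e
          hex=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")
          string+=$(echo -e "\x$hex")
      done
      echo "$string"
  }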
00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 58 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=: 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 127 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=$'\177' 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 58 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=: 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # printf %x 66 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:59.160 23:28:19 -- target/invalid.sh@25 -- # string+=B 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.160 23:28:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.160 23:28:19 -- target/invalid.sh@28 -- # [[ T == \- ]] 00:14:59.160 23:28:19 -- target/invalid.sh@31 -- # echo 'TD0")RAuAvnm5x?rNbGfu^+^mK m(:dYF#P3"::B' 00:14:59.160 23:28:19 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'TD0")RAuAvnm5x?rNbGfu^+^mK m(:dYF#P3"::B' nqn.2016-06.io.spdk:cnode14233 00:14:59.728 [2024-07-11 23:28:20.505596] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14233: invalid model number 'TD0")RAuAvnm5x?rNbGfu^+^mK m(:dYF#P3"::B' 00:14:59.728 23:28:20 -- target/invalid.sh@58 -- # out='request: 00:14:59.728 { 00:14:59.728 "nqn": "nqn.2016-06.io.spdk:cnode14233", 00:14:59.728 "model_number": "TD0\")RAuAvnm5x?rNbGfu^+^mK m(:dYF#P3\":\u007f:B", 00:14:59.728 "method": "nvmf_create_subsystem", 00:14:59.728 "req_id": 1 00:14:59.728 } 00:14:59.728 Got JSON-RPC error response 00:14:59.728 response: 00:14:59.728 { 00:14:59.728 "code": -32602, 00:14:59.728 "message": "Invalid MN TD0\")RAuAvnm5x?rNbGfu^+^mK m(:dYF#P3\":\u007f:B" 00:14:59.728 }' 00:14:59.728 23:28:20 -- target/invalid.sh@59 -- # [[ request: 00:14:59.728 { 00:14:59.728 "nqn": "nqn.2016-06.io.spdk:cnode14233", 00:14:59.728 "model_number": "TD0\")RAuAvnm5x?rNbGfu^+^mK m(:dYF#P3\":\u007f:B", 00:14:59.728 "method": "nvmf_create_subsystem", 00:14:59.728 "req_id": 1 00:14:59.728 } 00:14:59.728 Got JSON-RPC error response 00:14:59.728 response: 00:14:59.728 { 00:14:59.728 "code": -32602, 00:14:59.728 "message": "Invalid MN TD0\")RAuAvnm5x?rNbGfu^+^mK m(:dYF#P3\":\u007f:B" 00:14:59.728 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:59.728 23:28:20 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:59.987 [2024-07-11 23:28:20.846718] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.987 23:28:20 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:00.554 23:28:21 -- target/invalid.sh@64 -- # [[ tcp == 
\T\C\P ]] 00:15:00.554 23:28:21 -- target/invalid.sh@67 -- # echo '' 00:15:00.554 23:28:21 -- target/invalid.sh@67 -- # head -n 1 00:15:00.554 23:28:21 -- target/invalid.sh@67 -- # IP= 00:15:00.555 23:28:21 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:00.813 [2024-07-11 23:28:21.557213] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:00.813 23:28:21 -- target/invalid.sh@69 -- # out='request: 00:15:00.813 { 00:15:00.813 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:00.813 "listen_address": { 00:15:00.813 "trtype": "tcp", 00:15:00.813 "traddr": "", 00:15:00.813 "trsvcid": "4421" 00:15:00.813 }, 00:15:00.813 "method": "nvmf_subsystem_remove_listener", 00:15:00.813 "req_id": 1 00:15:00.813 } 00:15:00.813 Got JSON-RPC error response 00:15:00.813 response: 00:15:00.813 { 00:15:00.813 "code": -32602, 00:15:00.813 "message": "Invalid parameters" 00:15:00.813 }' 00:15:00.814 23:28:21 -- target/invalid.sh@70 -- # [[ request: 00:15:00.814 { 00:15:00.814 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:00.814 "listen_address": { 00:15:00.814 "trtype": "tcp", 00:15:00.814 "traddr": "", 00:15:00.814 "trsvcid": "4421" 00:15:00.814 }, 00:15:00.814 "method": "nvmf_subsystem_remove_listener", 00:15:00.814 "req_id": 1 00:15:00.814 } 00:15:00.814 Got JSON-RPC error response 00:15:00.814 response: 00:15:00.814 { 00:15:00.814 "code": -32602, 00:15:00.814 "message": "Invalid parameters" 00:15:00.814 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:00.814 23:28:21 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20128 -i 0 00:15:01.073 [2024-07-11 23:28:21.838080] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20128: invalid cntlid range [0-65519] 00:15:01.073 23:28:21 -- target/invalid.sh@73 -- # out='request: 00:15:01.073 { 00:15:01.073 "nqn": "nqn.2016-06.io.spdk:cnode20128", 00:15:01.073 "min_cntlid": 0, 00:15:01.073 "method": "nvmf_create_subsystem", 00:15:01.073 "req_id": 1 00:15:01.073 } 00:15:01.073 Got JSON-RPC error response 00:15:01.073 response: 00:15:01.073 { 00:15:01.073 "code": -32602, 00:15:01.073 "message": "Invalid cntlid range [0-65519]" 00:15:01.073 }' 00:15:01.073 23:28:21 -- target/invalid.sh@74 -- # [[ request: 00:15:01.073 { 00:15:01.073 "nqn": "nqn.2016-06.io.spdk:cnode20128", 00:15:01.073 "min_cntlid": 0, 00:15:01.073 "method": "nvmf_create_subsystem", 00:15:01.073 "req_id": 1 00:15:01.073 } 00:15:01.073 Got JSON-RPC error response 00:15:01.073 response: 00:15:01.073 { 00:15:01.073 "code": -32602, 00:15:01.073 "message": "Invalid cntlid range [0-65519]" 00:15:01.073 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.073 23:28:21 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27770 -i 65520 00:15:01.332 [2024-07-11 23:28:22.118988] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27770: invalid cntlid range [65520-65519] 00:15:01.332 23:28:22 -- target/invalid.sh@75 -- # out='request: 00:15:01.332 { 00:15:01.332 "nqn": "nqn.2016-06.io.spdk:cnode27770", 00:15:01.332 "min_cntlid": 65520, 00:15:01.332 "method": "nvmf_create_subsystem", 00:15:01.332 "req_id": 1 00:15:01.332 } 00:15:01.332 Got JSON-RPC error response 00:15:01.332 response: 
00:15:01.332 { 00:15:01.332 "code": -32602, 00:15:01.332 "message": "Invalid cntlid range [65520-65519]" 00:15:01.332 }' 00:15:01.332 23:28:22 -- target/invalid.sh@76 -- # [[ request: 00:15:01.332 { 00:15:01.332 "nqn": "nqn.2016-06.io.spdk:cnode27770", 00:15:01.332 "min_cntlid": 65520, 00:15:01.332 "method": "nvmf_create_subsystem", 00:15:01.332 "req_id": 1 00:15:01.332 } 00:15:01.332 Got JSON-RPC error response 00:15:01.332 response: 00:15:01.332 { 00:15:01.332 "code": -32602, 00:15:01.332 "message": "Invalid cntlid range [65520-65519]" 00:15:01.332 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.332 23:28:22 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30029 -I 0 00:15:01.591 [2024-07-11 23:28:22.456161] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30029: invalid cntlid range [1-0] 00:15:01.591 23:28:22 -- target/invalid.sh@77 -- # out='request: 00:15:01.591 { 00:15:01.591 "nqn": "nqn.2016-06.io.spdk:cnode30029", 00:15:01.591 "max_cntlid": 0, 00:15:01.591 "method": "nvmf_create_subsystem", 00:15:01.591 "req_id": 1 00:15:01.591 } 00:15:01.591 Got JSON-RPC error response 00:15:01.591 response: 00:15:01.591 { 00:15:01.591 "code": -32602, 00:15:01.591 "message": "Invalid cntlid range [1-0]" 00:15:01.591 }' 00:15:01.591 23:28:22 -- target/invalid.sh@78 -- # [[ request: 00:15:01.591 { 00:15:01.591 "nqn": "nqn.2016-06.io.spdk:cnode30029", 00:15:01.591 "max_cntlid": 0, 00:15:01.591 "method": "nvmf_create_subsystem", 00:15:01.591 "req_id": 1 00:15:01.591 } 00:15:01.591 Got JSON-RPC error response 00:15:01.591 response: 00:15:01.591 { 00:15:01.591 "code": -32602, 00:15:01.591 "message": "Invalid cntlid range [1-0]" 00:15:01.591 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.591 23:28:22 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32285 -I 65520 00:15:02.161 [2024-07-11 23:28:22.953837] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32285: invalid cntlid range [1-65520] 00:15:02.161 23:28:22 -- target/invalid.sh@79 -- # out='request: 00:15:02.161 { 00:15:02.161 "nqn": "nqn.2016-06.io.spdk:cnode32285", 00:15:02.161 "max_cntlid": 65520, 00:15:02.161 "method": "nvmf_create_subsystem", 00:15:02.161 "req_id": 1 00:15:02.161 } 00:15:02.161 Got JSON-RPC error response 00:15:02.161 response: 00:15:02.161 { 00:15:02.161 "code": -32602, 00:15:02.161 "message": "Invalid cntlid range [1-65520]" 00:15:02.161 }' 00:15:02.161 23:28:22 -- target/invalid.sh@80 -- # [[ request: 00:15:02.161 { 00:15:02.161 "nqn": "nqn.2016-06.io.spdk:cnode32285", 00:15:02.161 "max_cntlid": 65520, 00:15:02.161 "method": "nvmf_create_subsystem", 00:15:02.161 "req_id": 1 00:15:02.161 } 00:15:02.161 Got JSON-RPC error response 00:15:02.161 response: 00:15:02.161 { 00:15:02.161 "code": -32602, 00:15:02.161 "message": "Invalid cntlid range [1-65520]" 00:15:02.161 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:02.161 23:28:22 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31249 -i 6 -I 5 00:15:02.418 [2024-07-11 23:28:23.319108] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31249: invalid cntlid range [6-5] 00:15:02.418 23:28:23 -- target/invalid.sh@83 -- # out='request: 00:15:02.418 { 00:15:02.418 
"nqn": "nqn.2016-06.io.spdk:cnode31249", 00:15:02.418 "min_cntlid": 6, 00:15:02.418 "max_cntlid": 5, 00:15:02.418 "method": "nvmf_create_subsystem", 00:15:02.418 "req_id": 1 00:15:02.418 } 00:15:02.418 Got JSON-RPC error response 00:15:02.418 response: 00:15:02.418 { 00:15:02.418 "code": -32602, 00:15:02.418 "message": "Invalid cntlid range [6-5]" 00:15:02.418 }' 00:15:02.418 23:28:23 -- target/invalid.sh@84 -- # [[ request: 00:15:02.418 { 00:15:02.418 "nqn": "nqn.2016-06.io.spdk:cnode31249", 00:15:02.418 "min_cntlid": 6, 00:15:02.418 "max_cntlid": 5, 00:15:02.418 "method": "nvmf_create_subsystem", 00:15:02.418 "req_id": 1 00:15:02.418 } 00:15:02.418 Got JSON-RPC error response 00:15:02.418 response: 00:15:02.418 { 00:15:02.418 "code": -32602, 00:15:02.418 "message": "Invalid cntlid range [6-5]" 00:15:02.418 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:02.418 23:28:23 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:02.676 23:28:23 -- target/invalid.sh@87 -- # out='request: 00:15:02.676 { 00:15:02.676 "name": "foobar", 00:15:02.676 "method": "nvmf_delete_target", 00:15:02.676 "req_id": 1 00:15:02.676 } 00:15:02.676 Got JSON-RPC error response 00:15:02.676 response: 00:15:02.676 { 00:15:02.676 "code": -32602, 00:15:02.676 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:02.676 }' 00:15:02.676 23:28:23 -- target/invalid.sh@88 -- # [[ request: 00:15:02.676 { 00:15:02.676 "name": "foobar", 00:15:02.676 "method": "nvmf_delete_target", 00:15:02.676 "req_id": 1 00:15:02.676 } 00:15:02.676 Got JSON-RPC error response 00:15:02.676 response: 00:15:02.676 { 00:15:02.676 "code": -32602, 00:15:02.676 "message": "The specified target doesn't exist, cannot delete it." 
00:15:02.676 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:02.676 23:28:23 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:02.676 23:28:23 -- target/invalid.sh@91 -- # nvmftestfini 00:15:02.676 23:28:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.676 23:28:23 -- nvmf/common.sh@116 -- # sync 00:15:02.676 23:28:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:02.676 23:28:23 -- nvmf/common.sh@119 -- # set +e 00:15:02.676 23:28:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.676 23:28:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:02.676 rmmod nvme_tcp 00:15:02.676 rmmod nvme_fabrics 00:15:02.676 rmmod nvme_keyring 00:15:02.676 23:28:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.676 23:28:23 -- nvmf/common.sh@123 -- # set -e 00:15:02.676 23:28:23 -- nvmf/common.sh@124 -- # return 0 00:15:02.676 23:28:23 -- nvmf/common.sh@477 -- # '[' -n 204320 ']' 00:15:02.676 23:28:23 -- nvmf/common.sh@478 -- # killprocess 204320 00:15:02.676 23:28:23 -- common/autotest_common.sh@926 -- # '[' -z 204320 ']' 00:15:02.676 23:28:23 -- common/autotest_common.sh@930 -- # kill -0 204320 00:15:02.676 23:28:23 -- common/autotest_common.sh@931 -- # uname 00:15:02.676 23:28:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.676 23:28:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 204320 00:15:02.676 23:28:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.676 23:28:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.676 23:28:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 204320' 00:15:02.676 killing process with pid 204320 00:15:02.676 23:28:23 -- common/autotest_common.sh@945 -- # kill 204320 00:15:02.676 23:28:23 -- common/autotest_common.sh@950 -- # wait 204320 00:15:02.934 23:28:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.934 23:28:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.934 23:28:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.934 23:28:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.934 23:28:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.934 23:28:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.934 23:28:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.934 23:28:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.464 23:28:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:05.464 00:15:05.464 real 0m11.494s 00:15:05.464 user 0m31.106s 00:15:05.464 sys 0m3.127s 00:15:05.464 23:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.464 23:28:25 -- common/autotest_common.sh@10 -- # set +x 00:15:05.464 ************************************ 00:15:05.464 END TEST nvmf_invalid 00:15:05.464 ************************************ 00:15:05.464 23:28:25 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:05.464 23:28:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:05.464 23:28:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.464 23:28:25 -- common/autotest_common.sh@10 -- # set +x 00:15:05.464 ************************************ 00:15:05.464 START TEST nvmf_abort 00:15:05.464 ************************************ 00:15:05.464 23:28:25 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:05.464 * Looking for test storage... 00:15:05.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.464 23:28:25 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.464 23:28:25 -- nvmf/common.sh@7 -- # uname -s 00:15:05.464 23:28:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.464 23:28:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.464 23:28:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.464 23:28:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.464 23:28:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.464 23:28:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.464 23:28:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.464 23:28:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.464 23:28:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.464 23:28:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.464 23:28:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:05.464 23:28:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:05.464 23:28:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.464 23:28:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.464 23:28:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.464 23:28:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.464 23:28:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.464 23:28:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.464 23:28:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.464 23:28:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.464 23:28:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.464 23:28:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.464 23:28:25 -- paths/export.sh@5 -- # export PATH 00:15:05.465 23:28:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.465 23:28:25 -- nvmf/common.sh@46 -- # : 0 00:15:05.465 23:28:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.465 23:28:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.465 23:28:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.465 23:28:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.465 23:28:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.465 23:28:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:05.465 23:28:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.465 23:28:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.465 23:28:25 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.465 23:28:25 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:05.465 23:28:25 -- target/abort.sh@14 -- # nvmftestinit 00:15:05.465 23:28:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:05.465 23:28:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.465 23:28:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:05.465 23:28:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:05.465 23:28:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:05.465 23:28:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.465 23:28:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.465 23:28:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.465 23:28:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:05.465 23:28:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:05.465 23:28:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:05.465 23:28:25 -- common/autotest_common.sh@10 -- # set +x 00:15:07.993 23:28:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:07.993 23:28:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:07.993 23:28:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:07.993 23:28:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:07.993 23:28:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:07.993 23:28:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:07.993 23:28:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:07.993 23:28:28 -- nvmf/common.sh@294 -- # net_devs=() 00:15:07.993 23:28:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:07.993 23:28:28 -- nvmf/common.sh@295 -- 
# e810=() 00:15:07.993 23:28:28 -- nvmf/common.sh@295 -- # local -ga e810 00:15:07.993 23:28:28 -- nvmf/common.sh@296 -- # x722=() 00:15:07.993 23:28:28 -- nvmf/common.sh@296 -- # local -ga x722 00:15:07.993 23:28:28 -- nvmf/common.sh@297 -- # mlx=() 00:15:07.993 23:28:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:07.993 23:28:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.993 23:28:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:07.993 23:28:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:07.993 23:28:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:07.993 23:28:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.993 23:28:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:07.993 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:07.993 23:28:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.993 23:28:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:07.993 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:07.993 23:28:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:07.993 23:28:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:07.993 23:28:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.993 23:28:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:07.993 23:28:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.993 23:28:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:07.993 Found 
net devices under 0000:84:00.0: cvl_0_0 00:15:07.993 23:28:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.993 23:28:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:07.993 23:28:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.993 23:28:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:07.993 23:28:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.993 23:28:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:07.993 Found net devices under 0000:84:00.1: cvl_0_1 00:15:07.993 23:28:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.993 23:28:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:07.993 23:28:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:07.993 23:28:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:07.993 23:28:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:07.993 23:28:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.993 23:28:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.993 23:28:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:07.993 23:28:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:07.993 23:28:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:07.993 23:28:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:07.993 23:28:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:07.993 23:28:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:07.993 23:28:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.993 23:28:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:07.993 23:28:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:07.993 23:28:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:07.994 23:28:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:07.994 23:28:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:07.994 23:28:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:07.994 23:28:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:07.994 23:28:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:07.994 23:28:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:07.994 23:28:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:07.994 23:28:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:07.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:15:07.994 00:15:07.994 --- 10.0.0.2 ping statistics --- 00:15:07.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.994 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:15:07.994 23:28:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:07.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:07.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:15:07.994 00:15:07.994 --- 10.0.0.1 ping statistics --- 00:15:07.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.994 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:07.994 23:28:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.994 23:28:28 -- nvmf/common.sh@410 -- # return 0 00:15:07.994 23:28:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:07.994 23:28:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.994 23:28:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:07.994 23:28:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:07.994 23:28:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.994 23:28:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:07.994 23:28:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:07.994 23:28:28 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:07.994 23:28:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.994 23:28:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:07.994 23:28:28 -- common/autotest_common.sh@10 -- # set +x 00:15:07.994 23:28:28 -- nvmf/common.sh@469 -- # nvmfpid=207279 00:15:07.994 23:28:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:07.994 23:28:28 -- nvmf/common.sh@470 -- # waitforlisten 207279 00:15:07.994 23:28:28 -- common/autotest_common.sh@819 -- # '[' -z 207279 ']' 00:15:07.994 23:28:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.994 23:28:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.994 23:28:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.994 23:28:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.994 23:28:28 -- common/autotest_common.sh@10 -- # set +x 00:15:07.994 [2024-07-11 23:28:28.633807] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:07.994 [2024-07-11 23:28:28.633901] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.994 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.994 [2024-07-11 23:28:28.717985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:07.994 [2024-07-11 23:28:28.813265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.994 [2024-07-11 23:28:28.813444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.994 [2024-07-11 23:28:28.813463] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.994 [2024-07-11 23:28:28.813478] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
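The ping exchanges above close out the nvmf_tcp_init plumbing. Condensed, the two-namespace topology the suite builds looks like the sketch below; this is a minimal recap of the commands already traced (nvmf/common.sh@243-267), not a substitute for nvmf/common.sh, and the cvl_0_* names are simply the two ice ports this rig exposes.
# target port moves into its own namespace and gets the target address
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
Putting the target-side port in its own namespace lets a single host act as both initiator and target over real NICs without the kernel short-circuiting the traffic through loopback.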
00:15:07.994 [2024-07-11 23:28:28.813540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.994 [2024-07-11 23:28:28.813595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.994 [2024-07-11 23:28:28.813598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.931 23:28:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.931 23:28:29 -- common/autotest_common.sh@852 -- # return 0 00:15:08.931 23:28:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:08.931 23:28:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 23:28:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.931 23:28:29 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:15:08.931 23:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 [2024-07-11 23:28:29.700739] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.931 23:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.931 23:28:29 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:08.931 23:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 Malloc0 00:15:08.931 23:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.931 23:28:29 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:08.931 23:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 Delay0 00:15:08.931 23:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.931 23:28:29 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:08.931 23:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 23:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.931 23:28:29 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:08.931 23:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 23:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.931 23:28:29 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:08.931 23:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 [2024-07-11 23:28:29.773967] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.931 23:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.931 23:28:29 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:08.931 23:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.931 23:28:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.931 23:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.931 23:28:29 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:08.931 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.190 [2024-07-11 23:28:29.951326] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:11.096 Initializing NVMe Controllers 00:15:11.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:11.096 controller IO queue size 128 less than required 00:15:11.096 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:11.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:11.096 Initialization complete. Launching workers. 00:15:11.096 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31575 00:15:11.096 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31636, failed to submit 62 00:15:11.096 success 31575, unsuccess 61, failed 0 00:15:11.096 23:28:31 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:11.096 23:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.096 23:28:31 -- common/autotest_common.sh@10 -- # set +x 00:15:11.096 23:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.096 23:28:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:11.096 23:28:32 -- target/abort.sh@38 -- # nvmftestfini 00:15:11.096 23:28:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:11.096 23:28:32 -- nvmf/common.sh@116 -- # sync 00:15:11.096 23:28:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:11.096 23:28:32 -- nvmf/common.sh@119 -- # set +e 00:15:11.096 23:28:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:11.096 23:28:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:11.096 rmmod nvme_tcp 00:15:11.096 rmmod nvme_fabrics 00:15:11.096 rmmod nvme_keyring 00:15:11.356 23:28:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:11.356 23:28:32 -- nvmf/common.sh@123 -- # set -e 00:15:11.356 23:28:32 -- nvmf/common.sh@124 -- # return 0 00:15:11.356 23:28:32 -- nvmf/common.sh@477 -- # '[' -n 207279 ']' 00:15:11.356 23:28:32 -- nvmf/common.sh@478 -- # killprocess 207279 00:15:11.356 23:28:32 -- common/autotest_common.sh@926 -- # '[' -z 207279 ']' 00:15:11.356 23:28:32 -- common/autotest_common.sh@930 -- # kill -0 207279 00:15:11.356 23:28:32 -- common/autotest_common.sh@931 -- # uname 00:15:11.356 23:28:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:11.356 23:28:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 207279 00:15:11.356 23:28:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:11.356 23:28:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:11.356 23:28:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 207279' 00:15:11.356 killing process with pid 207279 00:15:11.356 23:28:32 -- common/autotest_common.sh@945 -- # kill 207279 00:15:11.356 23:28:32 -- common/autotest_common.sh@950 -- # wait 207279 00:15:11.616 23:28:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:11.616 23:28:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:11.616 23:28:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:11.616 23:28:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.616 23:28:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:11.616 23:28:32 
-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.616 23:28:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.616 23:28:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.523 23:28:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:13.523 00:15:13.523 real 0m8.568s 00:15:13.523 user 0m13.161s 00:15:13.523 sys 0m3.129s 00:15:13.523 23:28:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.523 23:28:34 -- common/autotest_common.sh@10 -- # set +x 00:15:13.523 ************************************ 00:15:13.523 END TEST nvmf_abort 00:15:13.523 ************************************ 00:15:13.523 23:28:34 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:13.523 23:28:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:13.523 23:28:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:13.523 23:28:34 -- common/autotest_common.sh@10 -- # set +x 00:15:13.523 ************************************ 00:15:13.523 START TEST nvmf_ns_hotplug_stress 00:15:13.523 ************************************ 00:15:13.523 23:28:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:13.782 * Looking for test storage... 00:15:13.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.782 23:28:34 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.782 23:28:34 -- nvmf/common.sh@7 -- # uname -s 00:15:13.782 23:28:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.782 23:28:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.782 23:28:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.782 23:28:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.782 23:28:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.782 23:28:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.782 23:28:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.782 23:28:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.782 23:28:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.782 23:28:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.782 23:28:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:13.782 23:28:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:13.782 23:28:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.782 23:28:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.782 23:28:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.782 23:28:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.782 23:28:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.782 23:28:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.782 23:28:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.782 23:28:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.782 23:28:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.782 23:28:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.782 23:28:34 -- paths/export.sh@5 -- # export PATH 00:15:13.782 23:28:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.782 23:28:34 -- nvmf/common.sh@46 -- # : 0 00:15:13.782 23:28:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:13.782 23:28:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:13.782 23:28:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:13.782 23:28:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.782 23:28:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.782 23:28:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:13.782 23:28:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:13.782 23:28:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:13.782 23:28:34 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.782 23:28:34 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:13.782 23:28:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:13.782 23:28:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.782 23:28:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:13.782 23:28:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:13.782 23:28:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:13.782 23:28:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:13.782 23:28:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.782 23:28:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.782 23:28:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:13.782 23:28:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:13.782 23:28:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:13.783 23:28:34 -- common/autotest_common.sh@10 -- # set +x 00:15:16.348 23:28:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:16.348 23:28:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:16.348 23:28:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:16.348 23:28:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:16.348 23:28:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:16.348 23:28:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:16.348 23:28:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:16.348 23:28:37 -- nvmf/common.sh@294 -- # net_devs=() 00:15:16.348 23:28:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:16.348 23:28:37 -- nvmf/common.sh@295 -- # e810=() 00:15:16.348 23:28:37 -- nvmf/common.sh@295 -- # local -ga e810 00:15:16.348 23:28:37 -- nvmf/common.sh@296 -- # x722=() 00:15:16.348 23:28:37 -- nvmf/common.sh@296 -- # local -ga x722 00:15:16.348 23:28:37 -- nvmf/common.sh@297 -- # mlx=() 00:15:16.348 23:28:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:16.348 23:28:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:16.348 23:28:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:16.348 23:28:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:16.348 23:28:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:16.348 23:28:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:16.348 23:28:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:16.348 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:16.348 23:28:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:16.348 23:28:37 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:16.348 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:16.348 23:28:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:16.348 23:28:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:16.348 23:28:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:16.348 23:28:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.348 23:28:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:16.348 23:28:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.349 23:28:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:16.349 Found net devices under 0000:84:00.0: cvl_0_0 00:15:16.349 23:28:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.349 23:28:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:16.349 23:28:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.349 23:28:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:16.349 23:28:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.349 23:28:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:16.349 Found net devices under 0000:84:00.1: cvl_0_1 00:15:16.349 23:28:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.349 23:28:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:16.349 23:28:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:16.349 23:28:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:16.349 23:28:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:16.349 23:28:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:16.349 23:28:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.349 23:28:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.349 23:28:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:16.349 23:28:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:16.349 23:28:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:16.349 23:28:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:16.349 23:28:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:16.349 23:28:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:16.349 23:28:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.349 23:28:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:16.349 23:28:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:16.349 23:28:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:16.349 23:28:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:16.608 23:28:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:16.608 23:28:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:16.608 23:28:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:16.608 23:28:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:15:16.608 23:28:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:16.608 23:28:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:16.608 23:28:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:16.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:15:16.608 00:15:16.608 --- 10.0.0.2 ping statistics --- 00:15:16.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.608 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:15:16.608 23:28:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:16.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:15:16.608 00:15:16.608 --- 10.0.0.1 ping statistics --- 00:15:16.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.608 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:15:16.608 23:28:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.608 23:28:37 -- nvmf/common.sh@410 -- # return 0 00:15:16.608 23:28:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:16.608 23:28:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.608 23:28:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:16.608 23:28:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:16.608 23:28:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.608 23:28:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:16.608 23:28:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:16.608 23:28:37 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:16.608 23:28:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:16.608 23:28:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:16.608 23:28:37 -- common/autotest_common.sh@10 -- # set +x 00:15:16.608 23:28:37 -- nvmf/common.sh@469 -- # nvmfpid=209791 00:15:16.608 23:28:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:16.608 23:28:37 -- nvmf/common.sh@470 -- # waitforlisten 209791 00:15:16.608 23:28:37 -- common/autotest_common.sh@819 -- # '[' -z 209791 ']' 00:15:16.608 23:28:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.608 23:28:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:16.608 23:28:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.608 23:28:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:16.608 23:28:37 -- common/autotest_common.sh@10 -- # set +x 00:15:16.608 [2024-07-11 23:28:37.495457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
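The nvmfappstart/waitforlisten handshake just traced follows a simple start-and-poll pattern. A condensed sketch, using rpc_get_methods as a stand-in for the readiness probe common.sh performs against the RPC socket:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# launch the target in the test namespace, pinned to cores 1-3 (-m 0xE);
# the Unix-domain RPC socket is unaffected by the network namespace
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll the JSON-RPC socket until the target answers or the retry budget runs out
for _ in $(seq 1 100); do
  $SPDK/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done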
00:15:16.608 [2024-07-11 23:28:37.495552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.608 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.867 [2024-07-11 23:28:37.584417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.867 [2024-07-11 23:28:37.690038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.867 [2024-07-11 23:28:37.690212] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.867 [2024-07-11 23:28:37.690234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.867 [2024-07-11 23:28:37.690249] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.867 [2024-07-11 23:28:37.690336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.867 [2024-07-11 23:28:37.690402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.867 [2024-07-11 23:28:37.690406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.249 23:28:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.249 23:28:38 -- common/autotest_common.sh@852 -- # return 0 00:15:18.249 23:28:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.249 23:28:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:18.249 23:28:38 -- common/autotest_common.sh@10 -- # set +x 00:15:18.249 23:28:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.249 23:28:38 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:18.249 23:28:38 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:18.507 [2024-07-11 23:28:39.264748] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.507 23:28:39 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:18.765 23:28:39 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.023 [2024-07-11 23:28:39.920245] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.023 23:28:39 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:19.589 23:28:40 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:19.848 Malloc0 00:15:19.848 23:28:40 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:20.419 Delay0 00:15:20.419 23:28:41 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.987 23:28:41 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:21.246 NULL1 00:15:21.246 23:28:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:21.812 23:28:42 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=210374 00:15:21.812 23:28:42 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:21.812 23:28:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:21.813 23:28:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.813 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.070 23:28:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.328 23:28:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:22.328 23:28:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:22.586 true 00:15:22.586 23:28:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:22.586 23:28:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.154 23:28:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.154 23:28:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:23.154 23:28:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:23.729 true 00:15:23.729 23:28:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:23.729 23:28:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.987 23:28:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.246 23:28:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:24.246 23:28:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:24.504 true 00:15:24.504 23:28:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:24.504 23:28:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.764 23:28:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.332 23:28:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:25.332 23:28:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:25.591 true 00:15:25.591 23:28:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:25.591 23:28:46 -- target/ns_hotplug_stress.sh@45 
00:15:25.591 23:28:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:25.850 23:28:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:26.418 23:28:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:15:26.418 23:28:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:15:26.678 true
00:15:26.678 23:28:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374
00:15:26.678 23:28:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:27.247 23:28:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:27.247 Read completed with error (sct=0, sc=11)
00:15:27.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:27.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:27.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:27.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:27.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:27.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:27.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:27.516 [2024-07-11 23:28:48.344040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the same ctrlr_bdev.c ERROR line repeats several hundred times, app timestamps 23:28:48.344167 through 23:28:48.365323; identical repetitions elided]
00:15:27.519 23:28:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
[a few more of the same ERROR lines interleave here; elided]
00:15:27.519 23:28:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
[the same ctrlr_bdev.c ERROR line continues past 23:28:48.374, to the end of this excerpt; identical repetitions elided]
size 512 > SGL length 1 00:15:27.520 [2024-07-11 23:28:48.374744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.520 [2024-07-11 23:28:48.374790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.520 [2024-07-11 23:28:48.374836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.520 [2024-07-11 23:28:48.374882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.520 [2024-07-11 23:28:48.374928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.520 [2024-07-11 23:28:48.374988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.375951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376643] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.376994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.377963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 
[2024-07-11 23:28:48.378389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.378947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.379953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.380978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381485] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.521 [2024-07-11 23:28:48.381974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.382991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 
[2024-07-11 23:28:48.383049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.383943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.384955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.385966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386607] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.386971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.387999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 
[2024-07-11 23:28:48.388304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.522 [2024-07-11 23:28:48.388949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.388995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.389999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.390993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391693] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.391969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.392996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 
[2024-07-11 23:28:48.393329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.393985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.394946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.523 [2024-07-11 23:28:48.395830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.395886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.395944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.395996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 [2024-07-11 23:28:48.396840] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.524 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:27.525 
[identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated for every rejected read from 2024-07-11 23:28:48.396887 through 23:28:48.431750; verbatim duplicates elided] 00:15:27.530 
[2024-07-11 23:28:48.431814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.431880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.431942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.432997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.433967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.434996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435257] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.530 [2024-07-11 23:28:48.435724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.435775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.435830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.435884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.435944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 
[2024-07-11 23:28:48.436734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.436967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.437964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.438644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.439996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440310] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.440991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 
[2024-07-11 23:28:48.441925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.441973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.531 [2024-07-11 23:28:48.442549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.442602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.442651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.442701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.442758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.442810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.442859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.442999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.443989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444936] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.444985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.445746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.446884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 
[2024-07-11 23:28:48.446946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.447973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.448977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.449037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.449098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.449166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.449225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.532 [2024-07-11 23:28:48.449275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.449957] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.450975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.451948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 
[2024-07-11 23:28:48.452126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.452987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.533 [2024-07-11 23:28:48.453674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:27.533 [identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entries repeated several hundred times, timestamps 2024-07-11 23:28:48.453737 through 23:28:48.467937, elapsed 00:15:27.533-00:15:27.817; omitted]
00:15:27.817 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:15:27.817 [identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entries repeated several hundred times, timestamps 2024-07-11 23:28:48.467993 through 23:28:48.488455, elapsed 00:15:27.817-00:15:27.820; omitted]
00:15:27.820 [2024-07-11 23:28:48.488526] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.488589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.488648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.488709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.488774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.488832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.488893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.488957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.489976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 
[2024-07-11 23:28:48.490157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.490968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.491032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.491095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.491167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.820 [2024-07-11 23:28:48.491228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.491967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.492959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493583] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.493958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.494975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 
[2024-07-11 23:28:48.495151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.495947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.496953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.497599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.498107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.498190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.498252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.498312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.498376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.498433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.821 [2024-07-11 23:28:48.498500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498765] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.498978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.499984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 
[2024-07-11 23:28:48.500376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.500965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.501928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.502957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503575] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.503970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.504904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.505354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.822 [2024-07-11 23:28:48.505435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 
[2024-07-11 23:28:48.505565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.505991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.506965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.507945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508629] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.508953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.509949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.510007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.510065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.510136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 [2024-07-11 23:28:48.510210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.823 
[2024-07-11 23:28:48.510704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:27.823 (the line above repeats several hundred times, identical except for its timestamp, from 23:28:48.510771 through 23:28:48.544373; the elapsed-time prefix advances from 00:15:27.823 to 00:15:27.828)
00:15:27.826 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:15:27.828 (the same read-length error resumes after the sketch below)
[2024-07-11 23:28:48.544431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.544986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.545958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.546944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547553] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.547983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.548979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 
[2024-07-11 23:28:48.549470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.549956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.550945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.829 [2024-07-11 23:28:48.551007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.551991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552858] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.552960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.553982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 
[2024-07-11 23:28:48.554414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.554977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.555980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.556972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557921] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.557980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.558038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.830 [2024-07-11 23:28:48.558093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.558943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 
[2024-07-11 23:28:48.559522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.559952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.560988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.561988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562945] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.562999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.563971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 
[2024-07-11 23:28:48.564584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.564817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.565200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.831 [2024-07-11 23:28:48.565264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.565958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.566960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.832 [2024-07-11 23:28:48.567948] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the same *ERROR* line from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeats several hundred times, timestamps 2024-07-11 23:28:48.568 through 23:28:48.603]
00:15:27.837 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.603675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.603736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.603799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.603860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.603930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.603995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.604056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.604118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.604188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.837 [2024-07-11 23:28:48.604247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.604944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605210] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.605955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.606682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 
[2024-07-11 23:28:48.607126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.607999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.608969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.609948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610128] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.610836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.838 [2024-07-11 23:28:48.611252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.611945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 
[2024-07-11 23:28:48.612126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.612993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.613948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.614922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615445] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.615941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.616956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 
[2024-07-11 23:28:48.617081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.839 [2024-07-11 23:28:48.617941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.618959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.619975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620448] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.620969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 
[2024-07-11 23:28:48.621935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.621998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.622989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.623953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.840 [2024-07-11 23:28:48.624744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.624791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.624861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.624926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.624991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.625055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.625115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.625187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.625251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.841 [2024-07-11 23:28:48.625313] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:27.841 [... the identical *ERROR* line above repeats several hundred times between 2024-07-11 23:28:48.625 and 23:28:48.658; duplicates elided ...]
00:15:27.845 true
NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.658702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.658762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.658824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.658884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.658947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.659989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660224] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.660982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.661988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 
[2024-07-11 23:28:48.662229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.662981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.846 [2024-07-11 23:28:48.663647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.663707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.663783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.663840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.663898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.663961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.664656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:27.847 [2024-07-11 23:28:48.665068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:15:27.847 [2024-07-11 23:28:48.665715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.665961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.666990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.667968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668669] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.668787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.669966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 [2024-07-11 23:28:48.670596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.847 
[2024-07-11 23:28:48.670659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.670724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.670791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.670855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.670919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.670980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.671943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.672962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 23:28:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:27.848 [2024-07-11 23:28:48.673446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 23:28:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.848 [2024-07-11 23:28:48.673587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.673946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.674995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675587] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.675988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.676983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.677039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 
[2024-07-11 23:28:48.677096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.677178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.677239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.848 [2024-07-11 23:28:48.677619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.677682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.677739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.677797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.677854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.677923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.677986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.678938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.679942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680679] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.680971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.681988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 [2024-07-11 23:28:48.682522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.849 
[2024-07-11 23:28:48.682570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:27.855 (last message repeated several hundred times, [2024-07-11 23:28:48.682570] through [2024-07-11 23:28:48.717742]; identical *ERROR* lines elided)
[2024-07-11 23:28:48.717800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.717856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.717923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.717984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.718959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.719979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.720959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721169] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.721953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 
[2024-07-11 23:28:48.722739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.722944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.855 [2024-07-11 23:28:48.723005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.723968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.724974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.725956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726265] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.726994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.727951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 
[2024-07-11 23:28:48.728130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 Message suppressed 999 times: [2024-07-11 23:28:48.728195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 Read completed with error (sct=0, sc=15) 00:15:27.856 [2024-07-11 23:28:48.728262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.728989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 
23:28:48.729616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.729998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.730951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.731012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.731071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.856 [2024-07-11 23:28:48.731128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:15:27.857 [2024-07-11 23:28:48.731278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.731998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.732976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.733995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734778] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.734944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.735875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 
[2024-07-11 23:28:48.736658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.736987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.737993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.738053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:27.857 [2024-07-11 23:28:48.738109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.738996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858 [2024-07-11 23:28:48.739684] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858
[2024-07-11 23:28:48.739739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:27.858
[... previous message repeated verbatim for every timestamp from 2024-07-11 23:28:48.739792 through 23:28:48.775159, with only the timestamps and the elapsed-time marker (00:15:27.858 to 00:15:28.136) advancing ...]
[2024-07-11 23:28:48.775220] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.775973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 
[2024-07-11 23:28:48.776790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.776988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.777949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.778997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.779964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.780041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.780101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.780180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.780246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.780308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.136 [2024-07-11 23:28:48.780356] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.780988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.781766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 
[2024-07-11 23:28:48.782239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.782953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.783983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.784978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785306] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.137 [2024-07-11 23:28:48.785978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.786994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 
[2024-07-11 23:28:48.787443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.787976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.788968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.789980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790690] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.790966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.138 [2024-07-11 23:28:48.791951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 
[2024-07-11 23:28:48.792263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.792962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.793971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.794991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:28.139 [2024-07-11 23:28:48.795410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:15:28.139 [2024-07-11 23:28:48.795667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.795983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.796970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.797029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.797076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.797153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.797216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:15:28.139 [2024-07-11 23:28:48.797269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[log collapsed for readability: the identical ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats several hundred times between 23:28:48.797269 and 23:28:48.830959; duplicates omitted]
[2024-07-11 23:28:48.831015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.831969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.832982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.833992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834483] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.834972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.835908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 
[2024-07-11 23:28:48.835970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.146 [2024-07-11 23:28:48.836865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.836912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.836958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.837959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.838985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839374] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.839993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.840993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 
[2024-07-11 23:28:48.841327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.841956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.842960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.147 [2024-07-11 23:28:48.843005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.843990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844343] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.844963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.845948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 
[2024-07-11 23:28:48.846272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.846989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.847982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.848611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.849013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.148 [2024-07-11 23:28:48.849084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849696] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.849976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.850940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 
[2024-07-11 23:28:48.851192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.851959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.852692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.853974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854600] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.149 [2024-07-11 23:28:48.854665 - 23:28:48.861062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated; duplicate lines omitted) 00:15:28.151
[2024-07-11 23:28:48.861463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.151 Message suppressed 999 times: [2024-07-11 23:28:48.861532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.151 Read completed with error (sct=0, sc=15) 00:15:28.151 (further identical errors omitted) 00:15:28.151
[2024-07-11 23:28:48.863049 - 23:28:48.889603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated; duplicate lines omitted) 00:15:28.156
[2024-07-11 23:28:48.889665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.889725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.889784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.889843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.889900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.889957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.890974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.891985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.892957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893006] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.893988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.894057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.156 [2024-07-11 23:28:48.894116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.894185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.894563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.894630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.894698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.894761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.894823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 
[2024-07-11 23:28:48.894884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.894948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.895941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.896962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.897968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898019] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.898990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 
[2024-07-11 23:28:48.899845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.157 [2024-07-11 23:28:48.899989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.900992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.901937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.902952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903202] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.903974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 
[2024-07-11 23:28:48.904801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.904981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.158 [2024-07-11 23:28:48.905639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.905697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.905764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.905822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.905887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.905937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.905990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.906933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.907957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908122] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.908942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 
[2024-07-11 23:28:48.909760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.909945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.910789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.911162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.911226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.159 [2024-07-11 23:28:48.911277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.160 [2024-07-11 23:28:48.911333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.160 [2024-07-11 23:28:48.911389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.160 [2024-07-11 23:28:48.911439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.160 [2024-07-11 23:28:48.911488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.160 [2024-07-11 23:28:48.911537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated back-to-back, wall-clock 2024-07-11 23:28:48.911598 through 23:28:48.927347, elapsed 00:15:28.160 to 00:15:28.162; only the microsecond timestamp advances ...]
00:15:28.162 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
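Context for the flood above: the error is emitted by nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:298) when a Read command's transfer size, NLB times the namespace block size, exceeds the SGL the host supplied; the unit test drives this deliberately (NLB 1 * 512-byte blocks against a 1-byte SGL). The suppressed completions report sct=0, sc=15, i.e. generic status 0x0f, which is the NVMe "Data SGL Length Invalid" code. A minimal standalone C sketch of that kind of check follows; the types and names (read_cmd_check, cpl_status) are illustrative, not the SPDK API:

    #include <inttypes.h>
    #include <stdio.h>

    /* NVMe generic command status (SCT 0): Data SGL Length Invalid is 0x0f,
     * matching the suppressed completions above (sct=0, sc=15). */
    #define SCT_GENERIC                0x0
    #define SC_DATA_SGL_LENGTH_INVALID 0x0f

    struct cpl_status { uint8_t sct; uint8_t sc; };

    static int
    read_cmd_check(uint64_t nlb, uint64_t block_size, uint32_t sgl_length,
                   struct cpl_status *status)
    {
        /* A read transfers nlb * block_size bytes; that must fit in the
         * host-supplied SGL. Here 1 * 512 > 1, so the command fails fast
         * without ever reaching the underlying bdev. */
        if (nlb * block_size > sgl_length) {
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu64
                    " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
            status->sct = SCT_GENERIC;
            status->sc  = SC_DATA_SGL_LENGTH_INVALID;
            return -1;  /* complete the command immediately with an error */
        }
        return 0;
    }

    int main(void)
    {
        struct cpl_status st = { 0, 0 };
        if (read_cmd_check(1, 512, 1, &st) != 0)
            printf("Read completed with error (sct=%u, sc=%u)\n", st.sct, st.sc);
        return 0;
    }

Compiled and run, this prints the same "Read completed with error (sct=0, sc=15)" shape as the suppressed messages, with the stderr line mirroring the repeated *ERROR* output.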
[... the same *ERROR* line continues repeating after the suppression notice, wall-clock 23:28:48.927738 through 23:28:48.946400, elapsed 00:15:28.162 to 00:15:28.166 ...]
00:15:28.166 [2024-07-11 23:28:48.946466] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.946997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.947863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 
[2024-07-11 23:28:48.948389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.948949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.949012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.949069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.949119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.949177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.166 [2024-07-11 23:28:48.949241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.949963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.950973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951484] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.951951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.952013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.952073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 [2024-07-11 23:28:48.952131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:28.167 23:28:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.425 23:28:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:28.425 23:28:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:28.683 true 00:15:28.683 23:28:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:28.683 23:28:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.249 23:28:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.538 23:28:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:29.538 23:28:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:29.796 true 00:15:29.796 23:28:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:29.796 23:28:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:15:30.054 23:28:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.313 23:28:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:30.313 23:28:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:30.879 true 00:15:30.879 23:28:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:30.879 23:28:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.447 23:28:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.705 23:28:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:31.705 23:28:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:31.964 true 00:15:32.221 23:28:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:32.221 23:28:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.478 23:28:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.735 23:28:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:32.735 23:28:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:32.992 true 00:15:32.993 23:28:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:32.993 23:28:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.558 23:28:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.816 23:28:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:33.816 23:28:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:34.074 true 00:15:34.074 23:28:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:34.074 23:28:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.332 23:28:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.590 23:28:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:34.590 23:28:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:35.157 true 00:15:35.157 23:28:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:35.157 23:28:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:15:35.727 23:28:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.987 23:28:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:35.987 23:28:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:36.244 true 00:15:36.244 23:28:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:36.244 23:28:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.503 23:28:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.072 23:28:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:37.072 23:28:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:37.330 true 00:15:37.330 23:28:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:37.330 23:28:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.897 23:28:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.897 23:28:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:37.897 23:28:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:38.464 true 00:15:38.464 23:28:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:38.464 23:28:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.722 23:28:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.981 23:28:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:38.981 23:28:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:39.241 true 00:15:39.241 23:29:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:39.241 23:29:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.210 23:29:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.210 23:29:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:40.210 23:29:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:40.468 true 00:15:40.725 23:29:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:40.725 23:29:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.982 23:29:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.239 23:29:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:41.239 23:29:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:41.497 true 00:15:41.497 23:29:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:41.497 23:29:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.754 23:29:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.011 23:29:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:42.011 23:29:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:42.268 true 00:15:42.268 23:29:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:42.268 23:29:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.200 23:29:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.458 23:29:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:43.458 23:29:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:43.715 true 00:15:43.715 23:29:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:43.715 23:29:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.974 23:29:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.542 23:29:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:44.542 23:29:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:44.800 true 00:15:44.800 23:29:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:44.800 23:29:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.059 23:29:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.624 23:29:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:45.624 23:29:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:45.883 true 00:15:45.883 23:29:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:45.883 23:29:06 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.140 23:29:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.706 23:29:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:46.706 23:29:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:46.964 true 00:15:46.964 23:29:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:46.965 23:29:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.222 23:29:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.788 23:29:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:47.788 23:29:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:48.075 true 00:15:48.075 23:29:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:48.075 23:29:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.333 23:29:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.592 23:29:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:48.592 23:29:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:49.159 true 00:15:49.159 23:29:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:49.159 23:29:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.415 23:29:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.673 23:29:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:49.673 23:29:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:50.237 true 00:15:50.237 23:29:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:50.237 23:29:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.495 23:29:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.752 23:29:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:50.752 23:29:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:51.011 true 00:15:51.269 23:29:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:51.269 23:29:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
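The cycle traced above (null_size 1007 through 1028) is one script iteration per resize: while the background I/O generator (pid 210374 in this run) stays alive, namespace 1 of nqn.2016-06.io.spdk:cnode1 is hot-removed and re-attached via the Delay0 bdev, and the NULL1 null bdev is resized one step larger. A minimal sketch of that loop, reconstructed from the @44-@50 xtrace lines; rpc_py, perf_pid, and the loop framing are assumptions, not the verbatim script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed shorthand
    perf_pid=210374   # assumed: pid of the background I/O workload
    null_size=1006
    while kill -0 "$perf_pid" 2>/dev/null; do                            # @44: loop while the workload lives
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1 under I/O
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                     # @49
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # @50: concurrent resize; RPC prints "true"
    done

The suppressed "Read completed with error (sct=0, sc=11)" completions interleaved above appear to be the reads that were still in flight each time the namespace was detached.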
00:15:51.527 23:29:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.784 23:29:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:51.784 23:29:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:52.350 true
00:15:52.350 Initializing NVMe Controllers
00:15:52.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:52.350 Controller IO queue size 128, less than required.
00:15:52.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:52.350 Controller IO queue size 128, less than required.
00:15:52.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:52.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:52.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:15:52.350 Initialization complete. Launching workers.
00:15:52.350 ========================================================
00:15:52.350                                                                            Latency(us)
00:15:52.350 Device Information                                                       :    IOPS      MiB/s    Average        min        max
00:15:52.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  964.81       0.47   20260.98    1917.24 1011455.70
00:15:52.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3828.22       1.87   33342.24    2283.11  358121.28
00:15:52.350 ========================================================
00:15:52.350 Total                                                                    : 4793.03       2.34   30709.05    1917.24 1011455.70
00:15:52.350
00:15:52.350 23:29:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 210374 00:15:52.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (210374) - No such process 00:15:52.350 23:29:13 -- target/ns_hotplug_stress.sh@53 -- # wait 210374 00:15:52.350 23:29:13 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.607 23:29:13 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:52.864 23:29:13 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:52.864 23:29:13 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:52.864 23:29:13 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:52.864 23:29:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:52.864 23:29:13 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:53.430 null0 00:15:53.430 23:29:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:53.430 23:29:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:53.430 23:29:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:53.687 null1 00:15:53.687 23:29:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:53.687 23:29:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:53.687 23:29:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:53.945 null2 00:15:53.945 23:29:14 --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:53.945 23:29:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:53.945 23:29:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:54.203 null3 00:15:54.203 23:29:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.203 23:29:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:54.203 23:29:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:54.461 null4 00:15:54.461 23:29:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.461 23:29:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:54.461 23:29:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:55.026 null5 00:15:55.026 23:29:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:55.026 23:29:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:55.026 23:29:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:55.284 null6 00:15:55.284 23:29:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:55.284 23:29:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:55.284 23:29:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:55.542 null7 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
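One way to sanity-check the Latency(us) summary a few entries up: the Total row's IOPS and MiB/s are the column sums (964.81 + 3828.22 = 4793.03 and 0.47 + 1.87 = 2.34), and its average latency is the IOPS-weighted mean of the two namespace rows. A quick illustrative check with the values copied from the table; the awk invocation is an added example under those assumptions, not part of the test output:

    awk 'BEGIN {
        iops1 = 964.81;  avg1 = 20260.98   # NSID 1 row
        iops2 = 3828.22; avg2 = 33342.24   # NSID 2 row
        total = iops1 + iops2              # 4793.03, the Total row IOPS
        printf "weighted avg = %.2f us\n", (iops1 * avg1 + iops2 * avg2) / total
    }'   # prints 30709.06, matching the table's 30709.05 up to rounding of the printed inputs

The Total min and max (1917.24 and 1011455.70) are likewise the element-wise extremes of the two rows.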
00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
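From nthreads=8 onward the test has switched to its multi-worker phase: eight null bdevs (null0 through null7, 100 MB with a 4096-byte block size per the bdev_null_create calls) are created, then one background add_remove worker per bdev repeatedly attaches it as namespace i+1 and detaches it again. A sketch of the shape reconstructed from the @14-@18 and @58-@66 xtrace lines; rpc_py is an assumed shorthand and the exact script text may differ:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed shorthand

    add_remove() {                   # @14-@18: ten add/remove rounds for one namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096   # @60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &             # @63: one background worker per namespace ID
        pids+=($!)                                   # @64
    done
    wait "${pids[@]}"                                # @66: the eight worker pids, 214539 through 214552 in this run

The interleaved nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns entries that follow are those eight workers racing against each other until the wait completes.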
00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@66 -- # wait 214539 214540 214542 214544 214546 214548 214550 214552 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.542 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:55.799 23:29:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.056 23:29:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:56.314 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.572 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:56.830 23:29:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.089 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.089 23:29:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.089 23:29:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.089 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.347 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.347 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.347 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.347 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.347 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.347 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.347 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.606 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.865 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.124 23:29:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
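The shuffled nsid ordering and the overlapping add/remove batches in the trace above are what concurrent hot-plug workers produce: lines 16-18 of ns_hotplug_stress.sh appear to run once per namespace, in the background, so eight loops interleave. A minimal sketch of that pattern, reconstructed from the @16/@17/@18 trace lines (the function and variable names are illustrative, not the verbatim script):

    #!/usr/bin/env bash
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        # @16: the repeated "(( ++i ))" / "(( i < 10 ))" pairs are this loop header.
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    # Eight concurrent workers, one per namespace/null-bdev pair; because the
    # RPCs take similar time, the workers stay roughly in phase, which is why
    # the log shows clusters of eight adds followed by clusters of eight removes
    # in apparently random nsid order.
    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &
    done
    wait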
00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:58.383 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.641 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.900 23:29:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.159 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.159 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.159 23:29:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.159 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.159 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.159 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.417 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:59.418 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.675 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
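Each rpc.py call above is a one-shot JSON-RPC client hitting the target's UNIX domain socket. What the add/remove lines translate to on the wire can be checked with the client's verbose mode; the payloads below are an approximation from the parameter names visible in the trace, not captured requests:

    # Hedged sketch: "-v DEBUG" makes rpc.py print the request/response JSON,
    # so the real payload can be verified against this approximation.
    "$rpc_py" -v DEBUG nvmf_subsystem_add_ns -n 4 "$nqn" null3
    # Expected shape of the request (the id is arbitrary):
    # {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
    #  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
    #             "namespace": {"bdev_name": "null3", "nsid": 4}}}
    "$rpc_py" -v DEBUG nvmf_subsystem_remove_ns "$nqn" 4
    # {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_remove_ns",
    #  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 4}}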
00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:59.934 23:29:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.193 23:29:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:00.193 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:00.193 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:00.193 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
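The null0-null7 arguments are bdev names. Their creation is not part of this excerpt, but given the bdev_null_create NULL1 1000 512 call in the connect_stress setup further down, the hot-plug targets were presumably created the same way; a hypothetical equivalent (sizes illustrative):

    # Hypothetical setup: eight null bdevs (1000 MiB, 512-byte blocks) to serve
    # as the hot-plug namespace targets null0..null7.
    for n in $(seq 0 7); do
        "$rpc_py" bdev_null_create "null$n" 1000 512
    done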
00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:00.451 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.452 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.452 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:00.452 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.452 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.452 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:00.709 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.967 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.226 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.226 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.226 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.226 23:29:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:01.226 23:29:22 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:01.226 23:29:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:01.226 23:29:22 -- nvmf/common.sh@116 -- # sync 00:16:01.226 23:29:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:01.226 23:29:22 -- nvmf/common.sh@119 -- # set +e 00:16:01.226 23:29:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:01.226 23:29:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:01.226 rmmod nvme_tcp 00:16:01.226 rmmod nvme_fabrics 00:16:01.226 rmmod nvme_keyring 00:16:01.226 23:29:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:01.226 23:29:22 -- nvmf/common.sh@123 -- # set -e 00:16:01.226 23:29:22 -- nvmf/common.sh@124 -- # return 0 00:16:01.226 23:29:22 -- nvmf/common.sh@477 -- # '[' -n 209791 ']' 00:16:01.226 23:29:22 -- nvmf/common.sh@478 -- # killprocess 209791 00:16:01.226 23:29:22 -- common/autotest_common.sh@926 -- # '[' -z 209791 ']' 00:16:01.226 23:29:22 -- common/autotest_common.sh@930 -- # kill -0 209791 00:16:01.226 23:29:22 -- common/autotest_common.sh@931 -- # uname 00:16:01.485 23:29:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:01.485 23:29:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 209791 00:16:01.485 23:29:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:01.485 23:29:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:01.485 23:29:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 209791' 00:16:01.485 killing 
process with pid 209791 00:16:01.485 23:29:22 -- common/autotest_common.sh@945 -- # kill 209791 00:16:01.485 23:29:22 -- common/autotest_common.sh@950 -- # wait 209791 00:16:01.744 23:29:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:01.744 23:29:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:01.744 23:29:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:01.744 23:29:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.744 23:29:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:01.744 23:29:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.744 23:29:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.744 23:29:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.651 23:29:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:03.651 00:16:03.651 real 0m50.076s 00:16:03.651 user 3m50.995s 00:16:03.651 sys 0m17.295s 00:16:03.651 23:29:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.651 23:29:24 -- common/autotest_common.sh@10 -- # set +x 00:16:03.651 ************************************ 00:16:03.651 END TEST nvmf_ns_hotplug_stress 00:16:03.651 ************************************ 00:16:03.651 23:29:24 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:03.651 23:29:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:03.651 23:29:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.651 23:29:24 -- common/autotest_common.sh@10 -- # set +x 00:16:03.651 ************************************ 00:16:03.651 START TEST nvmf_connect_stress 00:16:03.651 ************************************ 00:16:03.651 23:29:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:03.910 * Looking for test storage... 
00:16:03.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.910 23:29:24 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.910 23:29:24 -- nvmf/common.sh@7 -- # uname -s 00:16:03.910 23:29:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.910 23:29:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.910 23:29:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.910 23:29:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.910 23:29:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.910 23:29:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.910 23:29:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.910 23:29:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.910 23:29:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.910 23:29:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.910 23:29:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:03.910 23:29:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:03.910 23:29:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.910 23:29:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.910 23:29:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.910 23:29:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.910 23:29:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.910 23:29:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.910 23:29:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.910 23:29:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.911 23:29:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.911 23:29:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.911 23:29:24 -- paths/export.sh@5 -- # export PATH 00:16:03.911 23:29:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.911 23:29:24 -- nvmf/common.sh@46 -- # : 0 00:16:03.911 23:29:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:03.911 23:29:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:03.911 23:29:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:03.911 23:29:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.911 23:29:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.911 23:29:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:03.911 23:29:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:03.911 23:29:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:03.911 23:29:24 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:03.911 23:29:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:03.911 23:29:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.911 23:29:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:03.911 23:29:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:03.911 23:29:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:03.911 23:29:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.911 23:29:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.911 23:29:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.911 23:29:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:03.911 23:29:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:03.911 23:29:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:03.911 23:29:24 -- common/autotest_common.sh@10 -- # set +x 00:16:06.520 23:29:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.520 23:29:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:06.520 23:29:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:06.520 23:29:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:06.520 23:29:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:06.520 23:29:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:06.520 23:29:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:06.520 23:29:27 -- nvmf/common.sh@294 -- # net_devs=() 00:16:06.520 23:29:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:06.520 23:29:27 -- nvmf/common.sh@295 -- # e810=() 00:16:06.520 23:29:27 -- nvmf/common.sh@295 -- # local -ga e810 00:16:06.520 23:29:27 -- nvmf/common.sh@296 -- # x722=() 
00:16:06.520 23:29:27 -- nvmf/common.sh@296 -- # local -ga x722 00:16:06.520 23:29:27 -- nvmf/common.sh@297 -- # mlx=() 00:16:06.520 23:29:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:06.520 23:29:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.520 23:29:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:06.520 23:29:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:06.520 23:29:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:06.520 23:29:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.520 23:29:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:06.520 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:06.520 23:29:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.520 23:29:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:06.520 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:06.520 23:29:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:06.520 23:29:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:06.520 23:29:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:06.521 23:29:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:06.521 23:29:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.521 23:29:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.521 23:29:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.521 23:29:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:06.521 Found net devices under 0000:84:00.0: cvl_0_0 00:16:06.521 23:29:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
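nvmf/common.sh@300-@317 above index into pci_bus_cache, an associative map from "vendor:device" IDs to PCI addresses, and @382-@388 then resolve each matched function to its kernel interface name through sysfs. A stand-alone sketch of both steps (the sysfs walk that fills the cache is an assumption; the key format and the net/ resolution mirror the trace):

    declare -A pci_bus_cache   # "0xVENDOR:0xDEVICE" -> space-separated PCI addresses
    # Assumed fill: group every PCI function by its vendor/device ID pair.
    for dev in /sys/bus/pci/devices/*; do
        pci_bus_cache["$(<"$dev/vendor"):$(<"$dev/device")"]+=" ${dev##*/}"
    done
    # Lookup mirroring common.sh@301: 0x8086:0x159b is the E810 pair found above.
    e810=(${pci_bus_cache["0x8086:0x159b"]})
    for pci in "${e810[@]}"; do
        # common.sh@382-@388: a NIC's bound netdev names live under its sysfs node.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0
    done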
00:16:06.521 23:29:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:06.521 23:29:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.521 23:29:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.521 23:29:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.521 23:29:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:06.521 Found net devices under 0000:84:00.1: cvl_0_1 00:16:06.521 23:29:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.521 23:29:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:06.521 23:29:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:06.521 23:29:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:06.521 23:29:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:06.521 23:29:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:06.521 23:29:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.521 23:29:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.521 23:29:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.521 23:29:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:06.521 23:29:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.521 23:29:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.521 23:29:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:06.521 23:29:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.521 23:29:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.521 23:29:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:06.521 23:29:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:06.521 23:29:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.521 23:29:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.521 23:29:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.521 23:29:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.521 23:29:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:06.521 23:29:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.521 23:29:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.521 23:29:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.521 23:29:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:06.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:16:06.521 00:16:06.521 --- 10.0.0.2 ping statistics --- 00:16:06.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.521 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:16:06.521 23:29:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:16:06.521 00:16:06.521 --- 10.0.0.1 ping statistics --- 00:16:06.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.521 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:16:06.521 23:29:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.521 23:29:27 -- nvmf/common.sh@410 -- # return 0 00:16:06.521 23:29:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:06.521 23:29:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.521 23:29:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:06.521 23:29:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:06.521 23:29:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.521 23:29:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:06.521 23:29:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:06.521 23:29:27 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:06.521 23:29:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:06.521 23:29:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:06.521 23:29:27 -- common/autotest_common.sh@10 -- # set +x 00:16:06.521 23:29:27 -- nvmf/common.sh@469 -- # nvmfpid=217476 00:16:06.521 23:29:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:06.521 23:29:27 -- nvmf/common.sh@470 -- # waitforlisten 217476 00:16:06.521 23:29:27 -- common/autotest_common.sh@819 -- # '[' -z 217476 ']' 00:16:06.521 23:29:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.521 23:29:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.521 23:29:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.521 23:29:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.521 23:29:27 -- common/autotest_common.sh@10 -- # set +x 00:16:06.780 [2024-07-11 23:29:27.517776] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:06.780 [2024-07-11 23:29:27.517949] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.780 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.780 [2024-07-11 23:29:27.633331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:07.038 [2024-07-11 23:29:27.741828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.038 [2024-07-11 23:29:27.741987] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.038 [2024-07-11 23:29:27.742006] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.038 [2024-07-11 23:29:27.742021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
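The pings above close out the topology bring-up: nvmf_tcp_init moved the first E810 port (cvl_0_0, the target side, 10.0.0.2) into a private network namespace and left the second port (cvl_0_1, the initiator side, 10.0.0.1) in the default one, so the NVMe/TCP traffic crosses a real link between the two ports. A condensed replay of the @241-@266 trace:

    # Target port lives in its own netns; initiator port stays on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # host -> target ns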
00:16:07.038 [2024-07-11 23:29:27.742151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.038 [2024-07-11 23:29:27.742192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.038 [2024-07-11 23:29:27.742196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.972 23:29:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:07.972 23:29:28 -- common/autotest_common.sh@852 -- # return 0 00:16:07.972 23:29:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:07.972 23:29:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:07.972 23:29:28 -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 23:29:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.972 23:29:28 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:07.972 23:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.972 23:29:28 -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 [2024-07-11 23:29:28.900405] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.972 23:29:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.972 23:29:28 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:07.972 23:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.972 23:29:28 -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 23:29:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.972 23:29:28 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.972 23:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.972 23:29:28 -- common/autotest_common.sh@10 -- # set +x 00:16:08.230 [2024-07-11 23:29:28.927333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.230 23:29:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.230 23:29:28 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:08.230 23:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.230 23:29:28 -- common/autotest_common.sh@10 -- # set +x 00:16:08.230 NULL1 00:16:08.230 23:29:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.230 23:29:28 -- target/connect_stress.sh@21 -- # PERF_PID=217635 00:16:08.230 23:29:28 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:08.230 23:29:28 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:08.230 23:29:28 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:08.230 23:29:28 -- target/connect_stress.sh@28 -- # cat 00:16:08.230 23:29:28 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:08.230 23:29:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.230 23:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.230 23:29:28 -- common/autotest_common.sh@10 -- # set +x 00:16:08.487 23:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.487 23:29:29 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:08.487 23:29:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.487 23:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.487 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:16:08.744 23:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.744 23:29:29 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:08.744 23:29:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.744 23:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.744 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.309 23:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.309 23:29:29 -- target/connect_stress.sh@34 -- # 
kill -0 217635 00:16:09.309 23:29:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.309 23:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.309 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.566 23:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.566 23:29:30 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:09.566 23:29:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.566 23:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.566 23:29:30 -- common/autotest_common.sh@10 -- # set +x 00:16:09.824 23:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.824 23:29:30 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:09.824 23:29:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.824 23:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.824 23:29:30 -- common/autotest_common.sh@10 -- # set +x 00:16:10.081 23:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.082 23:29:30 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:10.082 23:29:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.082 23:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.082 23:29:30 -- common/autotest_common.sh@10 -- # set +x 00:16:10.339 23:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.339 23:29:31 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:10.339 23:29:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.339 23:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.339 23:29:31 -- common/autotest_common.sh@10 -- # set +x 00:16:10.905 23:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.905 23:29:31 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:10.905 23:29:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.905 23:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.905 23:29:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.163 23:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.163 23:29:31 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:11.163 23:29:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.163 23:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.163 23:29:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.421 23:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.421 23:29:32 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:11.421 23:29:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.421 23:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.421 23:29:32 -- common/autotest_common.sh@10 -- # set +x 00:16:11.678 23:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.678 23:29:32 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:11.678 23:29:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.678 23:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.678 23:29:32 -- common/autotest_common.sh@10 -- # set +x 00:16:11.935 23:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.935 23:29:32 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:11.935 23:29:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.936 23:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.936 23:29:32 -- common/autotest_common.sh@10 -- # set +x 00:16:12.500 23:29:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.500 23:29:33 -- target/connect_stress.sh@34 -- # kill -0 217635 
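From here to the end of the excerpt the script alternates connect_stress.sh@34 (kill -0 217635) with @35 (rpc_cmd): as long as the connect_stress client launched above is still alive, the target keeps servicing the batch of 20 RPC lines queued into rpc.txt by the seq 1 20 / cat loop. A plausible shape of that watchdog (xtrace does not show redirections, so the stdin feed is an assumption):

    PERF_PID=217635
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    # kill -0 delivers no signal; it only tests whether the PID still exists.
    while kill -0 "$PERF_PID"; do
        rpc_cmd <"$rpcs"   # @35: replay the queued RPC lines against the target
    done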
00:16:12.500 23:29:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.500 23:29:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.500 23:29:33 -- common/autotest_common.sh@10 -- # set +x 00:16:12.757 23:29:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.757 23:29:33 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:12.757 23:29:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.757 23:29:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.757 23:29:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.014 23:29:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.014 23:29:33 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:13.014 23:29:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.014 23:29:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.014 23:29:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.272 23:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.272 23:29:34 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:13.272 23:29:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.272 23:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.272 23:29:34 -- common/autotest_common.sh@10 -- # set +x 00:16:13.529 23:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.529 23:29:34 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:13.529 23:29:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.529 23:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.529 23:29:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.094 23:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.094 23:29:34 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:14.094 23:29:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.094 23:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.094 23:29:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.351 23:29:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.351 23:29:35 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:14.351 23:29:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.351 23:29:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.351 23:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:14.608 23:29:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.608 23:29:35 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:14.608 23:29:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.608 23:29:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.608 23:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:14.866 23:29:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.866 23:29:35 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:14.866 23:29:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.866 23:29:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.866 23:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:15.123 23:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.123 23:29:36 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:15.123 23:29:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.123 23:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.123 23:29:36 -- common/autotest_common.sh@10 -- # set +x 00:16:15.688 23:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.688 23:29:36 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:15.688 23:29:36 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.688 23:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.688 23:29:36 -- common/autotest_common.sh@10 -- # set +x 00:16:15.945 23:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.945 23:29:36 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:15.945 23:29:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.945 23:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.945 23:29:36 -- common/autotest_common.sh@10 -- # set +x 00:16:16.203 23:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.203 23:29:37 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:16.203 23:29:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.203 23:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.203 23:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:16.461 23:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.461 23:29:37 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:16.461 23:29:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.461 23:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.461 23:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:17.026 23:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.026 23:29:37 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:17.026 23:29:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.026 23:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.026 23:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:17.282 23:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.282 23:29:37 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:17.282 23:29:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.282 23:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.282 23:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:17.539 23:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.539 23:29:38 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:17.539 23:29:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.539 23:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.539 23:29:38 -- common/autotest_common.sh@10 -- # set +x 00:16:17.795 23:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.795 23:29:38 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:17.795 23:29:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.795 23:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.795 23:29:38 -- common/autotest_common.sh@10 -- # set +x 00:16:18.053 23:29:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.053 23:29:38 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:18.053 23:29:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.053 23:29:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.053 23:29:38 -- common/autotest_common.sh@10 -- # set +x 00:16:18.310 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:18.567 23:29:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.567 23:29:39 -- target/connect_stress.sh@34 -- # kill -0 217635 00:16:18.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (217635) - No such process 00:16:18.567 23:29:39 -- target/connect_stress.sh@38 -- # wait 217635 00:16:18.567 23:29:39 -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:18.567 23:29:39 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:18.567 23:29:39 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:18.567 23:29:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:18.567 23:29:39 -- nvmf/common.sh@116 -- # sync 00:16:18.567 23:29:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:18.567 23:29:39 -- nvmf/common.sh@119 -- # set +e 00:16:18.567 23:29:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:18.567 23:29:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:18.567 rmmod nvme_tcp 00:16:18.567 rmmod nvme_fabrics 00:16:18.567 rmmod nvme_keyring 00:16:18.567 23:29:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:18.567 23:29:39 -- nvmf/common.sh@123 -- # set -e 00:16:18.567 23:29:39 -- nvmf/common.sh@124 -- # return 0 00:16:18.568 23:29:39 -- nvmf/common.sh@477 -- # '[' -n 217476 ']' 00:16:18.568 23:29:39 -- nvmf/common.sh@478 -- # killprocess 217476 00:16:18.568 23:29:39 -- common/autotest_common.sh@926 -- # '[' -z 217476 ']' 00:16:18.568 23:29:39 -- common/autotest_common.sh@930 -- # kill -0 217476 00:16:18.568 23:29:39 -- common/autotest_common.sh@931 -- # uname 00:16:18.568 23:29:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:18.568 23:29:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 217476 00:16:18.568 23:29:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:18.568 23:29:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:18.568 23:29:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 217476' 00:16:18.568 killing process with pid 217476 00:16:18.568 23:29:39 -- common/autotest_common.sh@945 -- # kill 217476 00:16:18.568 23:29:39 -- common/autotest_common.sh@950 -- # wait 217476 00:16:18.826 23:29:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:18.826 23:29:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:18.826 23:29:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:18.826 23:29:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.826 23:29:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:18.826 23:29:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.826 23:29:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.826 23:29:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.353 23:29:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:21.353 00:16:21.353 real 0m17.129s 00:16:21.353 user 0m42.101s 00:16:21.353 sys 0m6.840s 00:16:21.353 23:29:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.353 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:21.353 ************************************ 00:16:21.353 END TEST nvmf_connect_stress 00:16:21.353 ************************************ 00:16:21.353 23:29:41 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:21.353 23:29:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:21.353 23:29:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:21.353 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:21.353 ************************************ 00:16:21.353 START TEST nvmf_fused_ordering 00:16:21.353 ************************************ 00:16:21.353 23:29:41 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:21.353 * Looking for test storage... 00:16:21.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.353 23:29:41 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.353 23:29:41 -- nvmf/common.sh@7 -- # uname -s 00:16:21.353 23:29:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.353 23:29:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.353 23:29:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.353 23:29:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.353 23:29:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.353 23:29:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.353 23:29:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.353 23:29:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.353 23:29:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.353 23:29:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.353 23:29:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:21.353 23:29:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:21.353 23:29:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.353 23:29:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.353 23:29:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.353 23:29:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.353 23:29:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.353 23:29:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.353 23:29:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.353 23:29:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.353 23:29:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.353 23:29:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.353 23:29:41 -- paths/export.sh@5 -- # export PATH 00:16:21.353 23:29:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.353 23:29:41 -- nvmf/common.sh@46 -- # : 0 00:16:21.353 23:29:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:21.353 23:29:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:21.353 23:29:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:21.353 23:29:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.353 23:29:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.353 23:29:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:21.353 23:29:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:21.353 23:29:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:21.353 23:29:41 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:21.353 23:29:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:21.353 23:29:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.353 23:29:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:21.353 23:29:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:21.353 23:29:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:21.353 23:29:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.353 23:29:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.353 23:29:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.353 23:29:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:21.353 23:29:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:21.353 23:29:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:21.353 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:23.884 23:29:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:23.884 23:29:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:23.884 23:29:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:23.884 23:29:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:23.884 23:29:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:23.884 23:29:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:23.884 23:29:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:23.884 23:29:44 -- nvmf/common.sh@294 -- # net_devs=() 00:16:23.884 23:29:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:23.884 23:29:44 -- nvmf/common.sh@295 -- # e810=() 00:16:23.884 23:29:44 -- nvmf/common.sh@295 -- # local -ga e810 00:16:23.884 23:29:44 -- nvmf/common.sh@296 -- # x722=() 
00:16:23.884 23:29:44 -- nvmf/common.sh@296 -- # local -ga x722 00:16:23.884 23:29:44 -- nvmf/common.sh@297 -- # mlx=() 00:16:23.884 23:29:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:23.884 23:29:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.884 23:29:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:23.884 23:29:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:23.884 23:29:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:23.884 23:29:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:23.884 23:29:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:23.884 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:23.884 23:29:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:23.884 23:29:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:23.884 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:23.884 23:29:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:23.884 23:29:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:23.884 23:29:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.884 23:29:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:23.884 23:29:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.884 23:29:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:23.884 Found net devices under 0000:84:00.0: cvl_0_0 00:16:23.884 23:29:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
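The gather_supported_nvmf_pci_devs walk traced above whitelists NICs by PCI vendor/device ID (0x8086:0x1592 and 0x8086:0x159b for Intel E810, 0x8086:0x37d2 for X722, plus the listed 0x15b3 Mellanox ConnectX IDs) and then resolves each match to its kernel netdev through sysfs. A rough stand-alone sketch of that scan, for reproducing it outside the harness (this is illustrative, not the nvmf/common.sh implementation, and it simplifies the Mellanox match to any 0x15b3 device):

    #!/usr/bin/env bash
    # Sketch: classify NVMe-oF-capable NICs by PCI ID and list their netdevs.
    shopt -s nullglob
    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx
    for pci in /sys/bus/pci/devices/*; do
      ven=$(<"$pci/vendor") dev=$(<"$pci/device")
      case "$ven:$dev" in
        "$intel:0x1592"|"$intel:0x159b") e810+=("${pci##*/}") ;;  # E810 (ice driver)
        "$intel:0x37d2")                 x722+=("${pci##*/}") ;;  # X722 iWARP
        "$mellanox:"*)                   mlx+=("${pci##*/}")  ;;  # ConnectX family
      esac
    done
    for addr in "${e810[@]}" "${x722[@]}" "${mlx[@]}"; do
      # the harness prints e.g. "Found net devices under 0000:84:00.0: cvl_0_0"
      pci_net_devs=(/sys/bus/pci/devices/"$addr"/net/*)
      (( ${#pci_net_devs[@]} )) && echo "Found net devices under $addr: ${pci_net_devs[*]##*/}"
    done

The same per-device block continues below for the second port, 0000:84:00.1.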
00:16:23.884 23:29:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:23.884 23:29:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.884 23:29:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:23.884 23:29:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.884 23:29:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:23.884 Found net devices under 0000:84:00.1: cvl_0_1 00:16:23.884 23:29:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.884 23:29:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:23.884 23:29:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:23.884 23:29:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:23.884 23:29:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:23.884 23:29:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.884 23:29:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.884 23:29:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.884 23:29:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:23.884 23:29:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.884 23:29:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.884 23:29:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:23.884 23:29:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.884 23:29:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.884 23:29:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:23.884 23:29:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:23.884 23:29:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.884 23:29:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.884 23:29:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.884 23:29:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.884 23:29:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:23.884 23:29:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.884 23:29:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.884 23:29:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.884 23:29:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:23.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:16:23.884 00:16:23.884 --- 10.0.0.2 ping statistics --- 00:16:23.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.884 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:16:23.884 23:29:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:16:23.884 00:16:23.884 --- 10.0.0.1 ping statistics --- 00:16:23.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.884 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:23.884 23:29:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.884 23:29:44 -- nvmf/common.sh@410 -- # return 0 00:16:23.885 23:29:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:23.885 23:29:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.885 23:29:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:23.885 23:29:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:23.885 23:29:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.885 23:29:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:23.885 23:29:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:23.885 23:29:44 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:23.885 23:29:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:23.885 23:29:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:23.885 23:29:44 -- common/autotest_common.sh@10 -- # set +x 00:16:23.885 23:29:44 -- nvmf/common.sh@469 -- # nvmfpid=220908 00:16:23.885 23:29:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:23.885 23:29:44 -- nvmf/common.sh@470 -- # waitforlisten 220908 00:16:23.885 23:29:44 -- common/autotest_common.sh@819 -- # '[' -z 220908 ']' 00:16:23.885 23:29:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.885 23:29:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:23.885 23:29:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.885 23:29:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:23.885 23:29:44 -- common/autotest_common.sh@10 -- # set +x 00:16:23.885 [2024-07-11 23:29:44.747648] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:23.885 [2024-07-11 23:29:44.747743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.885 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.885 [2024-07-11 23:29:44.831363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.173 [2024-07-11 23:29:44.933320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:24.173 [2024-07-11 23:29:44.933493] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.173 [2024-07-11 23:29:44.933513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.173 [2024-07-11 23:29:44.933528] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
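The nvmf_tgt instance starting above runs inside the cvl_0_0_ns_spdk network namespace that the nvmf_tcp_init records a little earlier assembled: one port of the back-to-back NIC pair (cvl_0_0, 10.0.0.2) is moved into the namespace for the target, while the other (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator. A condensed, hedged replay of that plumbing, with interface names and addresses taken from the trace (must run as root):

    #!/usr/bin/env bash
    # Sketch of the namespace topology from the nvmf_tcp_init trace above.
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target-side port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                              # initiator -> target sanity check
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

With that in place the target binary is launched under ip netns exec (nvmf/common.sh@468 above), so its TCP listener binds inside the namespace.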
00:16:24.173 [2024-07-11 23:29:44.933560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.110 23:29:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:25.110 23:29:45 -- common/autotest_common.sh@852 -- # return 0 00:16:25.110 23:29:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:25.110 23:29:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:25.110 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.110 23:29:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.110 23:29:45 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.110 23:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.110 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.110 [2024-07-11 23:29:45.789115] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.110 23:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.110 23:29:45 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:25.110 23:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.110 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.110 23:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.110 23:29:45 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.110 23:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.110 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.110 [2024-07-11 23:29:45.805329] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.110 23:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.110 23:29:45 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:25.110 23:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.110 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.110 NULL1 00:16:25.110 23:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.110 23:29:45 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:25.110 23:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.110 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.110 23:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.110 23:29:45 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:25.110 23:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.110 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.110 23:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.110 23:29:45 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:25.110 [2024-07-11 23:29:45.849371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
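The rpc_cmd records above provision the target end to end: a TCP transport with 8192-byte in-capsule data, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, up to 10 namespaces, any host allowed), a listener on 10.0.0.2:4420, and a 1000 MB null bdev exposed as namespace 1. rpc_cmd forwards its arguments to SPDK's scripts/rpc.py over /var/tmp/spdk.sock, so an equivalent hand-run sequence would look roughly like this ($SPDK_DIR is a placeholder for the checkout path; flags copied from the trace):

    #!/usr/bin/env bash
    # Hand-run equivalent of the provisioning RPCs traced above.
    set -e
    rpc="$SPDK_DIR/scripts/rpc.py"
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512         # name, size in MB, 512-byte blocks
    "$rpc" bdev_wait_for_examine
    "$rpc" nvmf_subsystem_add_ns "$nqn" NULL1      # becomes namespace ID 1

The fused_ordering initiator below then connects with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and produces the fused_ordering(N) sequence that follows.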
00:16:25.110 [2024-07-11 23:29:45.849419] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221042 ] 00:16:25.110 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.677 Attached to nqn.2016-06.io.spdk:cnode1 00:16:25.677 Namespace ID: 1 size: 1GB 00:16:25.677 fused_ordering(0) 00:16:25.677 fused_ordering(1) 00:16:25.677 fused_ordering(2) 00:16:25.677 fused_ordering(3) 00:16:25.677 fused_ordering(4) 00:16:25.677 fused_ordering(5) 00:16:25.677 fused_ordering(6) 00:16:25.677 fused_ordering(7) 00:16:25.677 fused_ordering(8) 00:16:25.677 fused_ordering(9) 00:16:25.677 fused_ordering(10) 00:16:25.677 fused_ordering(11) 00:16:25.677 fused_ordering(12) 00:16:25.677 fused_ordering(13) 00:16:25.677 fused_ordering(14) 00:16:25.677 fused_ordering(15) 00:16:25.677 fused_ordering(16) 00:16:25.677 fused_ordering(17) 00:16:25.677 fused_ordering(18) 00:16:25.677 fused_ordering(19) 00:16:25.677 fused_ordering(20) 00:16:25.677 fused_ordering(21) 00:16:25.677 fused_ordering(22) 00:16:25.677 fused_ordering(23) 00:16:25.677 fused_ordering(24) 00:16:25.677 fused_ordering(25) 00:16:25.677 fused_ordering(26) 00:16:25.677 fused_ordering(27) 00:16:25.677 fused_ordering(28) 00:16:25.677 fused_ordering(29) 00:16:25.677 fused_ordering(30) 00:16:25.677 fused_ordering(31) 00:16:25.677 fused_ordering(32) 00:16:25.677 fused_ordering(33) 00:16:25.677 fused_ordering(34) 00:16:25.677 fused_ordering(35) 00:16:25.677 fused_ordering(36) 00:16:25.677 fused_ordering(37) 00:16:25.677 fused_ordering(38) 00:16:25.677 fused_ordering(39) 00:16:25.677 fused_ordering(40) 00:16:25.677 fused_ordering(41) 00:16:25.677 fused_ordering(42) 00:16:25.677 fused_ordering(43) 00:16:25.677 fused_ordering(44) 00:16:25.677 fused_ordering(45) 00:16:25.677 fused_ordering(46) 00:16:25.677 fused_ordering(47) 00:16:25.677 fused_ordering(48) 00:16:25.677 fused_ordering(49) 00:16:25.677 fused_ordering(50) 00:16:25.677 fused_ordering(51) 00:16:25.677 fused_ordering(52) 00:16:25.677 fused_ordering(53) 00:16:25.677 fused_ordering(54) 00:16:25.677 fused_ordering(55) 00:16:25.677 fused_ordering(56) 00:16:25.677 fused_ordering(57) 00:16:25.677 fused_ordering(58) 00:16:25.677 fused_ordering(59) 00:16:25.677 fused_ordering(60) 00:16:25.677 fused_ordering(61) 00:16:25.677 fused_ordering(62) 00:16:25.677 fused_ordering(63) 00:16:25.677 fused_ordering(64) 00:16:25.677 fused_ordering(65) 00:16:25.677 fused_ordering(66) 00:16:25.677 fused_ordering(67) 00:16:25.677 fused_ordering(68) 00:16:25.677 fused_ordering(69) 00:16:25.677 fused_ordering(70) 00:16:25.677 fused_ordering(71) 00:16:25.677 fused_ordering(72) 00:16:25.677 fused_ordering(73) 00:16:25.677 fused_ordering(74) 00:16:25.677 fused_ordering(75) 00:16:25.677 fused_ordering(76) 00:16:25.677 fused_ordering(77) 00:16:25.677 fused_ordering(78) 00:16:25.677 fused_ordering(79) 00:16:25.677 fused_ordering(80) 00:16:25.677 fused_ordering(81) 00:16:25.677 fused_ordering(82) 00:16:25.677 fused_ordering(83) 00:16:25.677 fused_ordering(84) 00:16:25.677 fused_ordering(85) 00:16:25.677 fused_ordering(86) 00:16:25.677 fused_ordering(87) 00:16:25.677 fused_ordering(88) 00:16:25.677 fused_ordering(89) 00:16:25.677 fused_ordering(90) 00:16:25.677 fused_ordering(91) 00:16:25.677 fused_ordering(92) 00:16:25.677 fused_ordering(93) 00:16:25.677 fused_ordering(94) 00:16:25.677 fused_ordering(95) 00:16:25.677 fused_ordering(96) 00:16:25.677 
fused_ordering(97) 00:16:25.677 fused_ordering(98) 00:16:25.677 fused_ordering(99) 00:16:25.677 fused_ordering(100) 00:16:25.677 fused_ordering(101) 00:16:25.677 fused_ordering(102) 00:16:25.677 fused_ordering(103) 00:16:25.677 fused_ordering(104) 00:16:25.677 fused_ordering(105) 00:16:25.677 fused_ordering(106) 00:16:25.677 fused_ordering(107) 00:16:25.677 fused_ordering(108) 00:16:25.677 fused_ordering(109) 00:16:25.677 fused_ordering(110) 00:16:25.677 fused_ordering(111) 00:16:25.677 fused_ordering(112) 00:16:25.677 fused_ordering(113) 00:16:25.678 fused_ordering(114) 00:16:25.678 fused_ordering(115) 00:16:25.678 fused_ordering(116) 00:16:25.678 fused_ordering(117) 00:16:25.678 fused_ordering(118) 00:16:25.678 fused_ordering(119) 00:16:25.678 fused_ordering(120) 00:16:25.678 fused_ordering(121) 00:16:25.678 fused_ordering(122) 00:16:25.678 fused_ordering(123) 00:16:25.678 fused_ordering(124) 00:16:25.678 fused_ordering(125) 00:16:25.678 fused_ordering(126) 00:16:25.678 fused_ordering(127) 00:16:25.678 fused_ordering(128) 00:16:25.678 fused_ordering(129) 00:16:25.678 fused_ordering(130) 00:16:25.678 fused_ordering(131) 00:16:25.678 fused_ordering(132) 00:16:25.678 fused_ordering(133) 00:16:25.678 fused_ordering(134) 00:16:25.678 fused_ordering(135) 00:16:25.678 fused_ordering(136) 00:16:25.678 fused_ordering(137) 00:16:25.678 fused_ordering(138) 00:16:25.678 fused_ordering(139) 00:16:25.678 fused_ordering(140) 00:16:25.678 fused_ordering(141) 00:16:25.678 fused_ordering(142) 00:16:25.678 fused_ordering(143) 00:16:25.678 fused_ordering(144) 00:16:25.678 fused_ordering(145) 00:16:25.678 fused_ordering(146) 00:16:25.678 fused_ordering(147) 00:16:25.678 fused_ordering(148) 00:16:25.678 fused_ordering(149) 00:16:25.678 fused_ordering(150) 00:16:25.678 fused_ordering(151) 00:16:25.678 fused_ordering(152) 00:16:25.678 fused_ordering(153) 00:16:25.678 fused_ordering(154) 00:16:25.678 fused_ordering(155) 00:16:25.678 fused_ordering(156) 00:16:25.678 fused_ordering(157) 00:16:25.678 fused_ordering(158) 00:16:25.678 fused_ordering(159) 00:16:25.678 fused_ordering(160) 00:16:25.678 fused_ordering(161) 00:16:25.678 fused_ordering(162) 00:16:25.678 fused_ordering(163) 00:16:25.678 fused_ordering(164) 00:16:25.678 fused_ordering(165) 00:16:25.678 fused_ordering(166) 00:16:25.678 fused_ordering(167) 00:16:25.678 fused_ordering(168) 00:16:25.678 fused_ordering(169) 00:16:25.678 fused_ordering(170) 00:16:25.678 fused_ordering(171) 00:16:25.678 fused_ordering(172) 00:16:25.678 fused_ordering(173) 00:16:25.678 fused_ordering(174) 00:16:25.678 fused_ordering(175) 00:16:25.678 fused_ordering(176) 00:16:25.678 fused_ordering(177) 00:16:25.678 fused_ordering(178) 00:16:25.678 fused_ordering(179) 00:16:25.678 fused_ordering(180) 00:16:25.678 fused_ordering(181) 00:16:25.678 fused_ordering(182) 00:16:25.678 fused_ordering(183) 00:16:25.678 fused_ordering(184) 00:16:25.678 fused_ordering(185) 00:16:25.678 fused_ordering(186) 00:16:25.678 fused_ordering(187) 00:16:25.678 fused_ordering(188) 00:16:25.678 fused_ordering(189) 00:16:25.678 fused_ordering(190) 00:16:25.678 fused_ordering(191) 00:16:25.678 fused_ordering(192) 00:16:25.678 fused_ordering(193) 00:16:25.678 fused_ordering(194) 00:16:25.678 fused_ordering(195) 00:16:25.678 fused_ordering(196) 00:16:25.678 fused_ordering(197) 00:16:25.678 fused_ordering(198) 00:16:25.678 fused_ordering(199) 00:16:25.678 fused_ordering(200) 00:16:25.678 fused_ordering(201) 00:16:25.678 fused_ordering(202) 00:16:25.678 fused_ordering(203) 00:16:25.678 fused_ordering(204) 
00:16:25.678 fused_ordering(205) 00:16:26.244 fused_ordering(206) 00:16:26.244 fused_ordering(207) 00:16:26.244 fused_ordering(208) 00:16:26.244 fused_ordering(209) 00:16:26.244 fused_ordering(210) 00:16:26.244 fused_ordering(211) 00:16:26.244 fused_ordering(212) 00:16:26.244 fused_ordering(213) 00:16:26.244 fused_ordering(214) 00:16:26.244 fused_ordering(215) 00:16:26.244 fused_ordering(216) 00:16:26.244 fused_ordering(217) 00:16:26.244 fused_ordering(218) 00:16:26.244 fused_ordering(219) 00:16:26.244 fused_ordering(220) 00:16:26.244 fused_ordering(221) 00:16:26.244 fused_ordering(222) 00:16:26.244 fused_ordering(223) 00:16:26.244 fused_ordering(224) 00:16:26.244 fused_ordering(225) 00:16:26.244 fused_ordering(226) 00:16:26.244 fused_ordering(227) 00:16:26.244 fused_ordering(228) 00:16:26.244 fused_ordering(229) 00:16:26.244 fused_ordering(230) 00:16:26.244 fused_ordering(231) 00:16:26.244 fused_ordering(232) 00:16:26.244 fused_ordering(233) 00:16:26.244 fused_ordering(234) 00:16:26.244 fused_ordering(235) 00:16:26.244 fused_ordering(236) 00:16:26.244 fused_ordering(237) 00:16:26.244 fused_ordering(238) 00:16:26.244 fused_ordering(239) 00:16:26.244 fused_ordering(240) 00:16:26.244 fused_ordering(241) 00:16:26.244 fused_ordering(242) 00:16:26.244 fused_ordering(243) 00:16:26.244 fused_ordering(244) 00:16:26.244 fused_ordering(245) 00:16:26.244 fused_ordering(246) 00:16:26.244 fused_ordering(247) 00:16:26.244 fused_ordering(248) 00:16:26.244 fused_ordering(249) 00:16:26.244 fused_ordering(250) 00:16:26.244 fused_ordering(251) 00:16:26.244 fused_ordering(252) 00:16:26.244 fused_ordering(253) 00:16:26.244 fused_ordering(254) 00:16:26.244 fused_ordering(255) 00:16:26.244 fused_ordering(256) 00:16:26.244 fused_ordering(257) 00:16:26.244 fused_ordering(258) 00:16:26.244 fused_ordering(259) 00:16:26.244 fused_ordering(260) 00:16:26.244 fused_ordering(261) 00:16:26.244 fused_ordering(262) 00:16:26.244 fused_ordering(263) 00:16:26.244 fused_ordering(264) 00:16:26.244 fused_ordering(265) 00:16:26.244 fused_ordering(266) 00:16:26.244 fused_ordering(267) 00:16:26.244 fused_ordering(268) 00:16:26.244 fused_ordering(269) 00:16:26.244 fused_ordering(270) 00:16:26.244 fused_ordering(271) 00:16:26.244 fused_ordering(272) 00:16:26.244 fused_ordering(273) 00:16:26.244 fused_ordering(274) 00:16:26.244 fused_ordering(275) 00:16:26.244 fused_ordering(276) 00:16:26.244 fused_ordering(277) 00:16:26.244 fused_ordering(278) 00:16:26.244 fused_ordering(279) 00:16:26.244 fused_ordering(280) 00:16:26.244 fused_ordering(281) 00:16:26.244 fused_ordering(282) 00:16:26.244 fused_ordering(283) 00:16:26.244 fused_ordering(284) 00:16:26.245 fused_ordering(285) 00:16:26.245 fused_ordering(286) 00:16:26.245 fused_ordering(287) 00:16:26.245 fused_ordering(288) 00:16:26.245 fused_ordering(289) 00:16:26.245 fused_ordering(290) 00:16:26.245 fused_ordering(291) 00:16:26.245 fused_ordering(292) 00:16:26.245 fused_ordering(293) 00:16:26.245 fused_ordering(294) 00:16:26.245 fused_ordering(295) 00:16:26.245 fused_ordering(296) 00:16:26.245 fused_ordering(297) 00:16:26.245 fused_ordering(298) 00:16:26.245 fused_ordering(299) 00:16:26.245 fused_ordering(300) 00:16:26.245 fused_ordering(301) 00:16:26.245 fused_ordering(302) 00:16:26.245 fused_ordering(303) 00:16:26.245 fused_ordering(304) 00:16:26.245 fused_ordering(305) 00:16:26.245 fused_ordering(306) 00:16:26.245 fused_ordering(307) 00:16:26.245 fused_ordering(308) 00:16:26.245 fused_ordering(309) 00:16:26.245 fused_ordering(310) 00:16:26.245 fused_ordering(311) 00:16:26.245 
fused_ordering(312) 00:16:26.245 fused_ordering(313) 00:16:26.245 fused_ordering(314) 00:16:26.245 fused_ordering(315) 00:16:26.245 fused_ordering(316) 00:16:26.245 fused_ordering(317) 00:16:26.245 fused_ordering(318) 00:16:26.245 fused_ordering(319) 00:16:26.245 fused_ordering(320) 00:16:26.245 fused_ordering(321) 00:16:26.245 fused_ordering(322) 00:16:26.245 fused_ordering(323) 00:16:26.245 fused_ordering(324) 00:16:26.245 fused_ordering(325) 00:16:26.245 fused_ordering(326) 00:16:26.245 fused_ordering(327) 00:16:26.245 fused_ordering(328) 00:16:26.245 fused_ordering(329) 00:16:26.245 fused_ordering(330) 00:16:26.245 fused_ordering(331) 00:16:26.245 fused_ordering(332) 00:16:26.245 fused_ordering(333) 00:16:26.245 fused_ordering(334) 00:16:26.245 fused_ordering(335) 00:16:26.245 fused_ordering(336) 00:16:26.245 fused_ordering(337) 00:16:26.245 fused_ordering(338) 00:16:26.245 fused_ordering(339) 00:16:26.245 fused_ordering(340) 00:16:26.245 fused_ordering(341) 00:16:26.245 fused_ordering(342) 00:16:26.245 fused_ordering(343) 00:16:26.245 fused_ordering(344) 00:16:26.245 fused_ordering(345) 00:16:26.245 fused_ordering(346) 00:16:26.245 fused_ordering(347) 00:16:26.245 fused_ordering(348) 00:16:26.245 fused_ordering(349) 00:16:26.245 fused_ordering(350) 00:16:26.245 fused_ordering(351) 00:16:26.245 fused_ordering(352) 00:16:26.245 fused_ordering(353) 00:16:26.245 fused_ordering(354) 00:16:26.245 fused_ordering(355) 00:16:26.245 fused_ordering(356) 00:16:26.245 fused_ordering(357) 00:16:26.245 fused_ordering(358) 00:16:26.245 fused_ordering(359) 00:16:26.245 fused_ordering(360) 00:16:26.245 fused_ordering(361) 00:16:26.245 fused_ordering(362) 00:16:26.245 fused_ordering(363) 00:16:26.245 fused_ordering(364) 00:16:26.245 fused_ordering(365) 00:16:26.245 fused_ordering(366) 00:16:26.245 fused_ordering(367) 00:16:26.245 fused_ordering(368) 00:16:26.245 fused_ordering(369) 00:16:26.245 fused_ordering(370) 00:16:26.245 fused_ordering(371) 00:16:26.245 fused_ordering(372) 00:16:26.245 fused_ordering(373) 00:16:26.245 fused_ordering(374) 00:16:26.245 fused_ordering(375) 00:16:26.245 fused_ordering(376) 00:16:26.245 fused_ordering(377) 00:16:26.245 fused_ordering(378) 00:16:26.245 fused_ordering(379) 00:16:26.245 fused_ordering(380) 00:16:26.245 fused_ordering(381) 00:16:26.245 fused_ordering(382) 00:16:26.245 fused_ordering(383) 00:16:26.245 fused_ordering(384) 00:16:26.245 fused_ordering(385) 00:16:26.245 fused_ordering(386) 00:16:26.245 fused_ordering(387) 00:16:26.245 fused_ordering(388) 00:16:26.245 fused_ordering(389) 00:16:26.245 fused_ordering(390) 00:16:26.245 fused_ordering(391) 00:16:26.245 fused_ordering(392) 00:16:26.245 fused_ordering(393) 00:16:26.245 fused_ordering(394) 00:16:26.245 fused_ordering(395) 00:16:26.245 fused_ordering(396) 00:16:26.245 fused_ordering(397) 00:16:26.245 fused_ordering(398) 00:16:26.245 fused_ordering(399) 00:16:26.245 fused_ordering(400) 00:16:26.245 fused_ordering(401) 00:16:26.245 fused_ordering(402) 00:16:26.245 fused_ordering(403) 00:16:26.245 fused_ordering(404) 00:16:26.245 fused_ordering(405) 00:16:26.245 fused_ordering(406) 00:16:26.245 fused_ordering(407) 00:16:26.245 fused_ordering(408) 00:16:26.245 fused_ordering(409) 00:16:26.245 fused_ordering(410) 00:16:27.179 fused_ordering(411) 00:16:27.179 fused_ordering(412) 00:16:27.179 fused_ordering(413) 00:16:27.179 fused_ordering(414) 00:16:27.179 fused_ordering(415) 00:16:27.179 fused_ordering(416) 00:16:27.179 fused_ordering(417) 00:16:27.179 fused_ordering(418) 00:16:27.179 fused_ordering(419) 
00:16:27.179 fused_ordering(420) 00:16:27.179 fused_ordering(421) 00:16:27.179 fused_ordering(422) 00:16:27.179 fused_ordering(423) 00:16:27.179 fused_ordering(424) 00:16:27.179 fused_ordering(425) 00:16:27.179 fused_ordering(426) 00:16:27.179 fused_ordering(427) 00:16:27.179 fused_ordering(428) 00:16:27.179 fused_ordering(429) 00:16:27.179 fused_ordering(430) 00:16:27.179 fused_ordering(431) 00:16:27.179 fused_ordering(432) 00:16:27.179 fused_ordering(433) 00:16:27.179 fused_ordering(434) 00:16:27.179 fused_ordering(435) 00:16:27.179 fused_ordering(436) 00:16:27.179 fused_ordering(437) 00:16:27.179 fused_ordering(438) 00:16:27.179 fused_ordering(439) 00:16:27.179 fused_ordering(440) 00:16:27.179 fused_ordering(441) 00:16:27.179 fused_ordering(442) 00:16:27.179 fused_ordering(443) 00:16:27.179 fused_ordering(444) 00:16:27.179 fused_ordering(445) 00:16:27.179 fused_ordering(446) 00:16:27.179 fused_ordering(447) 00:16:27.179 fused_ordering(448) 00:16:27.179 fused_ordering(449) 00:16:27.179 fused_ordering(450) 00:16:27.179 fused_ordering(451) 00:16:27.179 fused_ordering(452) 00:16:27.179 fused_ordering(453) 00:16:27.179 fused_ordering(454) 00:16:27.179 fused_ordering(455) 00:16:27.179 fused_ordering(456) 00:16:27.179 fused_ordering(457) 00:16:27.179 fused_ordering(458) 00:16:27.179 fused_ordering(459) 00:16:27.179 fused_ordering(460) 00:16:27.179 fused_ordering(461) 00:16:27.179 fused_ordering(462) 00:16:27.179 fused_ordering(463) 00:16:27.179 fused_ordering(464) 00:16:27.179 fused_ordering(465) 00:16:27.179 fused_ordering(466) 00:16:27.179 fused_ordering(467) 00:16:27.179 fused_ordering(468) 00:16:27.179 fused_ordering(469) 00:16:27.179 fused_ordering(470) 00:16:27.179 fused_ordering(471) 00:16:27.179 fused_ordering(472) 00:16:27.179 fused_ordering(473) 00:16:27.179 fused_ordering(474) 00:16:27.179 fused_ordering(475) 00:16:27.179 fused_ordering(476) 00:16:27.179 fused_ordering(477) 00:16:27.179 fused_ordering(478) 00:16:27.179 fused_ordering(479) 00:16:27.179 fused_ordering(480) 00:16:27.179 fused_ordering(481) 00:16:27.179 fused_ordering(482) 00:16:27.179 fused_ordering(483) 00:16:27.179 fused_ordering(484) 00:16:27.179 fused_ordering(485) 00:16:27.179 fused_ordering(486) 00:16:27.179 fused_ordering(487) 00:16:27.179 fused_ordering(488) 00:16:27.179 fused_ordering(489) 00:16:27.179 fused_ordering(490) 00:16:27.179 fused_ordering(491) 00:16:27.179 fused_ordering(492) 00:16:27.180 fused_ordering(493) 00:16:27.180 fused_ordering(494) 00:16:27.180 fused_ordering(495) 00:16:27.180 fused_ordering(496) 00:16:27.180 fused_ordering(497) 00:16:27.180 fused_ordering(498) 00:16:27.180 fused_ordering(499) 00:16:27.180 fused_ordering(500) 00:16:27.180 fused_ordering(501) 00:16:27.180 fused_ordering(502) 00:16:27.180 fused_ordering(503) 00:16:27.180 fused_ordering(504) 00:16:27.180 fused_ordering(505) 00:16:27.180 fused_ordering(506) 00:16:27.180 fused_ordering(507) 00:16:27.180 fused_ordering(508) 00:16:27.180 fused_ordering(509) 00:16:27.180 fused_ordering(510) 00:16:27.180 fused_ordering(511) 00:16:27.180 fused_ordering(512) 00:16:27.180 fused_ordering(513) 00:16:27.180 fused_ordering(514) 00:16:27.180 fused_ordering(515) 00:16:27.180 fused_ordering(516) 00:16:27.180 fused_ordering(517) 00:16:27.180 fused_ordering(518) 00:16:27.180 fused_ordering(519) 00:16:27.180 fused_ordering(520) 00:16:27.180 fused_ordering(521) 00:16:27.180 fused_ordering(522) 00:16:27.180 fused_ordering(523) 00:16:27.180 fused_ordering(524) 00:16:27.180 fused_ordering(525) 00:16:27.180 fused_ordering(526) 00:16:27.180 
fused_ordering(527) 00:16:27.180 fused_ordering(528) 00:16:27.180 fused_ordering(529) 00:16:27.180 fused_ordering(530) 00:16:27.180 fused_ordering(531) 00:16:27.180 fused_ordering(532) 00:16:27.180 fused_ordering(533) 00:16:27.180 fused_ordering(534) 00:16:27.180 fused_ordering(535) 00:16:27.180 fused_ordering(536) 00:16:27.180 fused_ordering(537) 00:16:27.180 fused_ordering(538) 00:16:27.180 fused_ordering(539) 00:16:27.180 fused_ordering(540) 00:16:27.180 fused_ordering(541) 00:16:27.180 fused_ordering(542) 00:16:27.180 fused_ordering(543) 00:16:27.180 fused_ordering(544) 00:16:27.180 fused_ordering(545) 00:16:27.180 fused_ordering(546) 00:16:27.180 fused_ordering(547) 00:16:27.180 fused_ordering(548) 00:16:27.180 fused_ordering(549) 00:16:27.180 fused_ordering(550) 00:16:27.180 fused_ordering(551) 00:16:27.180 fused_ordering(552) 00:16:27.180 fused_ordering(553) 00:16:27.180 fused_ordering(554) 00:16:27.180 fused_ordering(555) 00:16:27.180 fused_ordering(556) 00:16:27.180 fused_ordering(557) 00:16:27.180 fused_ordering(558) 00:16:27.180 fused_ordering(559) 00:16:27.180 fused_ordering(560) 00:16:27.180 fused_ordering(561) 00:16:27.180 fused_ordering(562) 00:16:27.180 fused_ordering(563) 00:16:27.180 fused_ordering(564) 00:16:27.180 fused_ordering(565) 00:16:27.180 fused_ordering(566) 00:16:27.180 fused_ordering(567) 00:16:27.180 fused_ordering(568) 00:16:27.180 fused_ordering(569) 00:16:27.180 fused_ordering(570) 00:16:27.180 fused_ordering(571) 00:16:27.180 fused_ordering(572) 00:16:27.180 fused_ordering(573) 00:16:27.180 fused_ordering(574) 00:16:27.180 fused_ordering(575) 00:16:27.180 fused_ordering(576) 00:16:27.180 fused_ordering(577) 00:16:27.180 fused_ordering(578) 00:16:27.180 fused_ordering(579) 00:16:27.180 fused_ordering(580) 00:16:27.180 fused_ordering(581) 00:16:27.180 fused_ordering(582) 00:16:27.180 fused_ordering(583) 00:16:27.180 fused_ordering(584) 00:16:27.180 fused_ordering(585) 00:16:27.180 fused_ordering(586) 00:16:27.180 fused_ordering(587) 00:16:27.180 fused_ordering(588) 00:16:27.180 fused_ordering(589) 00:16:27.180 fused_ordering(590) 00:16:27.180 fused_ordering(591) 00:16:27.180 fused_ordering(592) 00:16:27.180 fused_ordering(593) 00:16:27.180 fused_ordering(594) 00:16:27.180 fused_ordering(595) 00:16:27.180 fused_ordering(596) 00:16:27.180 fused_ordering(597) 00:16:27.180 fused_ordering(598) 00:16:27.180 fused_ordering(599) 00:16:27.180 fused_ordering(600) 00:16:27.180 fused_ordering(601) 00:16:27.180 fused_ordering(602) 00:16:27.180 fused_ordering(603) 00:16:27.180 fused_ordering(604) 00:16:27.180 fused_ordering(605) 00:16:27.180 fused_ordering(606) 00:16:27.180 fused_ordering(607) 00:16:27.180 fused_ordering(608) 00:16:27.180 fused_ordering(609) 00:16:27.180 fused_ordering(610) 00:16:27.180 fused_ordering(611) 00:16:27.180 fused_ordering(612) 00:16:27.180 fused_ordering(613) 00:16:27.180 fused_ordering(614) 00:16:27.180 fused_ordering(615) 00:16:27.746 fused_ordering(616) 00:16:27.746 fused_ordering(617) 00:16:27.746 fused_ordering(618) 00:16:27.746 fused_ordering(619) 00:16:27.746 fused_ordering(620) 00:16:27.746 fused_ordering(621) 00:16:27.746 fused_ordering(622) 00:16:27.746 fused_ordering(623) 00:16:27.746 fused_ordering(624) 00:16:27.746 fused_ordering(625) 00:16:27.746 fused_ordering(626) 00:16:27.746 fused_ordering(627) 00:16:27.746 fused_ordering(628) 00:16:27.746 fused_ordering(629) 00:16:27.746 fused_ordering(630) 00:16:27.746 fused_ordering(631) 00:16:27.746 fused_ordering(632) 00:16:27.746 fused_ordering(633) 00:16:27.746 fused_ordering(634) 
00:16:27.746 fused_ordering(635) 00:16:27.746 fused_ordering(636) [... identical fused_ordering(n) completion lines for n = 637 through 819, all stamped 00:16:27.746, condensed ...] 00:16:27.746 fused_ordering(820) 00:16:28.680 fused_ordering(821) [... identical fused_ordering(n) lines for n = 822 through 955, all stamped 00:16:28.680, condensed ...] 00:16:28.680 fused_ordering(956)
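Once the counter run finishes, what follows is the suite's standard teardown: clear the exit trap, sync, unload the initiator-side kernel modules with a bounded retry (module references can linger while connections drain), and finally kill the target app by pid after checking the pid still belongs to the expected process. A minimal sketch of that pattern, with illustrative helper and variable names rather than the suite's exact definitions:

  # Sketch of the teardown pattern traced below; names are illustrative.
  nvmf_teardown() {
    local pid=$1
    trap - SIGINT SIGTERM EXIT          # drop the cleanup trap so it cannot re-fire
    sync
    set +e                              # module removal may fail while refs drain
    for i in {1..20}; do
      modprobe -v -r nvme-tcp && break  # the "rmmod nvme_tcp/nvme_fabrics/nvme_keyring"
      sleep 1                           # lines below are this call's -v output
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
      # refuse to kill a recycled pid: verify the command name first
      [ "$(ps --no-headers -o comm= "$pid")" != sudo ] && kill "$pid" && wait "$pid"
    fi
  }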
00:16:28.680 fused_ordering(957) [... identical fused_ordering(n) lines for n = 958 through 1022, all stamped 00:16:28.680, condensed ...] 00:16:28.680 fused_ordering(1023) 00:16:28.680 23:29:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:28.680 23:29:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:28.680 23:29:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:28.680 23:29:49 -- nvmf/common.sh@116 -- # sync 00:16:28.680 23:29:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:28.680 23:29:49 -- nvmf/common.sh@119 -- # set +e 00:16:28.680 23:29:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:28.680 23:29:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:28.680 rmmod nvme_tcp 00:16:28.680 rmmod nvme_fabrics 00:16:28.680 rmmod nvme_keyring 00:16:28.680 23:29:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:28.680 23:29:49 -- nvmf/common.sh@123 -- # set -e 00:16:28.680 23:29:49 -- nvmf/common.sh@124 -- # return 0 00:16:28.680 23:29:49 -- nvmf/common.sh@477 -- # '[' -n 220908 ']' 00:16:28.680 23:29:49 -- nvmf/common.sh@478 -- # killprocess 220908 00:16:28.680 23:29:49 -- common/autotest_common.sh@926 -- # '[' -z 220908 ']' 00:16:28.680 23:29:49 -- common/autotest_common.sh@930 -- # kill -0 220908 00:16:28.680 23:29:49 -- common/autotest_common.sh@931 -- # uname 00:16:28.680 23:29:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.680 23:29:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o
comm= 220908 00:16:28.938 23:29:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:28.938 23:29:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:28.938 23:29:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 220908' 00:16:28.938 killing process with pid 220908 00:16:28.938 23:29:49 -- common/autotest_common.sh@945 -- # kill 220908 00:16:28.938 23:29:49 -- common/autotest_common.sh@950 -- # wait 220908 00:16:29.197 23:29:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:29.197 23:29:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:29.197 23:29:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:29.197 23:29:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.197 23:29:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:29.197 23:29:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.197 23:29:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.197 23:29:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.097 23:29:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:31.097 00:16:31.097 real 0m10.258s 00:16:31.097 user 0m7.234s 00:16:31.097 sys 0m5.327s 00:16:31.097 23:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.097 23:29:51 -- common/autotest_common.sh@10 -- # set +x 00:16:31.097 ************************************ 00:16:31.097 END TEST nvmf_fused_ordering 00:16:31.097 ************************************ 00:16:31.097 23:29:52 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:31.097 23:29:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:31.097 23:29:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:31.097 23:29:52 -- common/autotest_common.sh@10 -- # set +x 00:16:31.097 ************************************ 00:16:31.097 START TEST nvmf_delete_subsystem 00:16:31.097 ************************************ 00:16:31.097 23:29:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:31.356 * Looking for test storage... 
00:16:31.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.356 23:29:52 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.356 23:29:52 -- nvmf/common.sh@7 -- # uname -s 00:16:31.356 23:29:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.356 23:29:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.356 23:29:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.356 23:29:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.356 23:29:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.356 23:29:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.356 23:29:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.356 23:29:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.356 23:29:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.356 23:29:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.356 23:29:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.356 23:29:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.356 23:29:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.356 23:29:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.356 23:29:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.356 23:29:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.356 23:29:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.356 23:29:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.356 23:29:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.356 23:29:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.356 23:29:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.356 23:29:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.356 23:29:52 -- paths/export.sh@5 -- # export PATH 00:16:31.356 23:29:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.356 23:29:52 -- nvmf/common.sh@46 -- # : 0 00:16:31.356 23:29:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:31.356 23:29:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:31.356 23:29:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:31.356 23:29:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.356 23:29:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.356 23:29:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:31.356 23:29:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:31.356 23:29:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:31.356 23:29:52 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:31.356 23:29:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:31.356 23:29:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.356 23:29:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:31.356 23:29:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:31.357 23:29:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:31.357 23:29:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.357 23:29:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.357 23:29:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.357 23:29:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:31.357 23:29:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:31.357 23:29:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:31.357 23:29:52 -- common/autotest_common.sh@10 -- # set +x 00:16:33.891 23:29:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:33.891 23:29:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:33.891 23:29:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:33.891 23:29:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:33.891 23:29:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:33.891 23:29:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:33.891 23:29:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:33.891 23:29:54 -- nvmf/common.sh@294 -- # net_devs=() 00:16:33.891 23:29:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:33.891 23:29:54 -- nvmf/common.sh@295 -- # e810=() 00:16:33.891 23:29:54 -- nvmf/common.sh@295 -- # local -ga e810 00:16:33.891 23:29:54 -- nvmf/common.sh@296 -- # x722=() 
00:16:33.891 23:29:54 -- nvmf/common.sh@296 -- # local -ga x722 00:16:33.891 23:29:54 -- nvmf/common.sh@297 -- # mlx=() 00:16:33.891 23:29:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:33.891 23:29:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.891 23:29:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:33.891 23:29:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:33.891 23:29:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:33.891 23:29:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:33.891 23:29:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:33.891 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:33.891 23:29:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:33.891 23:29:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:33.891 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:33.891 23:29:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:33.891 23:29:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:33.891 23:29:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.891 23:29:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:33.891 23:29:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.891 23:29:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:33.891 Found net devices under 0000:84:00.0: cvl_0_0 00:16:33.891 23:29:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
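The scan above matches the host's PCI NICs against the suite's Intel and Mellanox device-ID tables (both ports here are E810 functions, 0x8086:0x159b) and then resolves each matching function to its kernel interface by globbing the device's net/ directory in sysfs. A sketch of that resolution step, assuming only the standard sysfs layout (the function name is illustrative; the path and the sample address come straight from the trace):

  # Sketch: resolve a PCI function to its kernel net interface via sysfs.
  pci_to_netdev() {
    local pci=$1                                    # e.g. 0000:84:00.0
    local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [ -e "${pci_net_devs[0]}" ] || return 1         # no netdev bound (wrong driver?)
    pci_net_devs=("${pci_net_devs[@]##*/}")         # strip the sysfs path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  }
  pci_to_netdev 0000:84:00.0   # on this host: Found net devices under 0000:84:00.0: cvl_0_0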
00:16:33.891 23:29:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:33.891 23:29:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.891 23:29:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:33.891 23:29:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.891 23:29:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:33.891 Found net devices under 0000:84:00.1: cvl_0_1 00:16:33.891 23:29:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.891 23:29:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:33.891 23:29:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:33.891 23:29:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:33.891 23:29:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:33.891 23:29:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.891 23:29:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.891 23:29:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.891 23:29:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:33.891 23:29:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.892 23:29:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.892 23:29:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:33.892 23:29:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.892 23:29:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.892 23:29:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:33.892 23:29:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:33.892 23:29:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.892 23:29:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.892 23:29:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.892 23:29:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.151 23:29:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:34.151 23:29:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.151 23:29:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.151 23:29:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.151 23:29:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:34.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:16:34.151 00:16:34.151 --- 10.0.0.2 ping statistics --- 00:16:34.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.151 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:16:34.151 23:29:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:16:34.151 00:16:34.151 --- 10.0.0.1 ping statistics --- 00:16:34.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.151 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:16:34.151 23:29:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.151 23:29:54 -- nvmf/common.sh@410 -- # return 0 00:16:34.151 23:29:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:34.151 23:29:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.151 23:29:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:34.151 23:29:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:34.151 23:29:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.151 23:29:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:34.151 23:29:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:34.151 23:29:54 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:34.151 23:29:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:34.151 23:29:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:34.151 23:29:54 -- common/autotest_common.sh@10 -- # set +x 00:16:34.151 23:29:54 -- nvmf/common.sh@469 -- # nvmfpid=223499 00:16:34.151 23:29:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:34.151 23:29:54 -- nvmf/common.sh@470 -- # waitforlisten 223499 00:16:34.151 23:29:54 -- common/autotest_common.sh@819 -- # '[' -z 223499 ']' 00:16:34.151 23:29:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.151 23:29:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:34.151 23:29:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.151 23:29:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:34.151 23:29:54 -- common/autotest_common.sh@10 -- # set +x 00:16:34.151 [2024-07-11 23:29:55.040503] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:34.151 [2024-07-11 23:29:55.040675] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.409 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.409 [2024-07-11 23:29:55.148972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:34.409 [2024-07-11 23:29:55.243594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:34.409 [2024-07-11 23:29:55.243756] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.410 [2024-07-11 23:29:55.243776] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.410 [2024-07-11 23:29:55.243791] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
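Both E810 ports sit in the same chassis, so the suite manufactures a real two-endpoint TCP path by splitting them across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings above verify each direction before the target starts. The same sequence, condensed from the traced commands into one runnable sketch (root required; a summary, not the suite's exact function):

  # Sketch: the namespace topology the trace builds before starting nvmf_tgt.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
  # The target then runs inside the namespace, as the trace shows:
  #   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3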
00:16:34.410 [2024-07-11 23:29:55.243891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.410 [2024-07-11 23:29:55.243898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.783 23:29:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.783 23:29:56 -- common/autotest_common.sh@852 -- # return 0 00:16:35.783 23:29:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:35.783 23:29:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:35.783 23:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.783 23:29:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.783 23:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.783 23:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.783 [2024-07-11 23:29:56.408264] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.783 23:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:35.783 23:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.783 23:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.783 23:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.783 23:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.783 23:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.783 [2024-07-11 23:29:56.424576] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.783 23:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:35.783 23:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.783 23:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.783 NULL1 00:16:35.783 23:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:35.783 23:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.783 23:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.783 Delay0 00:16:35.783 23:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.783 23:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.783 23:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.783 23:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@28 -- # perf_pid=223755 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:35.783 23:29:56 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:35.783 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.784 [2024-07-11 23:29:56.529391] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:37.681 23:29:58 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.681 23:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:37.681 23:29:58 -- common/autotest_common.sh@10 -- # set +x
00:16:37.939 Read completed with error (sct=0, sc=8) 00:16:37.939 Write completed with error (sct=0, sc=8) 00:16:37.939 starting I/O failed: -6 [... several hundred further interleaved "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines at 00:16:37.939-00:16:38.873, condensed: every command queued by the perf workload is aborted as the subsystem is torn down under it ...]
00:16:37.940 [2024-07-11 23:29:58.742419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fca8400c350 is same with the state(5) to be set
00:16:37.940 [2024-07-11 23:29:58.743481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7570 is same with the state(5) to be set
00:16:38.873 [2024-07-11 23:29:59.708781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacdd70 is same with the state(5) to be set
00:16:38.873 [2024-07-11 23:29:59.743825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fca8400bf20 is same with the state(5) to be set
00:16:38.873 [2024-07-11 23:29:59.744454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fca8400c600 is same with the state(5) to be set
00:16:38.873 [2024-07-11 23:29:59.744885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7820 is same with the state(5) to be set
[the aborted-completion stream continues at 00:16:38.873 until the perf process exits]
00:16:38.873 Read completed with error (sct=0, sc=8) [... the final few dozen aborted Read/Write completions, all at 00:16:38.873, condensed ...] 00:16:38.873 [2024-07-11 23:29:59.745494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae73f0 is same with the state(5) to be set 00:16:38.873 [2024-07-11 23:29:59.745983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacdd70 (9): Bad file descriptor 00:16:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:38.873 23:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.873 23:29:59 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:38.873 23:29:59 -- target/delete_subsystem.sh@35 -- # kill -0 223755 00:16:38.873 23:29:59 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:16:38.873 Initializing NVMe Controllers
00:16:38.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:38.873 Controller IO queue size 128, less than required.
00:16:38.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:38.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:16:38.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:16:38.873 Initialization complete. Launching workers.
00:16:38.873 ========================================================
00:16:38.873                                                                    Latency(us)
00:16:38.873 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:16:38.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  188.48    0.09  900201.28     741.96 1013375.98
00:16:38.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  170.12    0.08  895107.23     480.64 1014023.87
00:16:38.873 ========================================================
00:16:38.873 Total                                                                    :  358.60    0.18  897784.60     480.64 1014023.87
00:16:38.873
00:16:39.439 23:30:00 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:39.439 23:30:00 -- target/delete_subsystem.sh@35 -- # kill -0 223755 00:16:39.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (223755) - No such process 00:16:39.439 23:30:00 -- target/delete_subsystem.sh@45 -- # NOT wait 223755 00:16:39.439 23:30:00 -- common/autotest_common.sh@640 -- # local es=0 00:16:39.439 23:30:00 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 223755 00:16:39.439 23:30:00 -- common/autotest_common.sh@628 -- # local arg=wait 00:16:39.439 23:30:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:39.440 23:30:00 -- common/autotest_common.sh@632 -- # type -t wait 00:16:39.440 23:30:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:39.440 23:30:00 -- common/autotest_common.sh@643 -- # wait 223755 00:16:39.440 23:30:00 -- common/autotest_common.sh@643 -- # es=1 00:16:39.440 23:30:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:39.440 23:30:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:39.440 23:30:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s
SPDK00000000000001 -m 10 00:16:39.440 23:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.440 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.440 23:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.440 23:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.440 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.440 [2024-07-11 23:30:00.270393] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.440 23:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.440 23:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.440 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.440 23:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@54 -- # perf_pid=224219 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:39.440 23:30:00 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:39.440 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.440 [2024-07-11 23:30:00.351405] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:40.005 23:30:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:40.005 23:30:00 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:40.005 23:30:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:40.569 23:30:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:40.569 23:30:01 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:40.569 23:30:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:41.133 23:30:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:41.133 23:30:01 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:41.133 23:30:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:41.391 23:30:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:41.391 23:30:02 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:41.391 23:30:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:41.955 23:30:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:41.955 23:30:02 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:41.955 23:30:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:42.523 23:30:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:42.523 23:30:03 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:42.523 23:30:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:42.781 Initializing NVMe Controllers 00:16:42.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.781 Controller IO queue size 128, less than required. 
00:16:42.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:42.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:42.781 Initialization complete. Launching workers. 00:16:42.781 ======================================================== 00:16:42.781 Latency(us) 00:16:42.781 Device Information : IOPS MiB/s Average min max 00:16:42.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004550.84 1000264.88 1011257.08 00:16:42.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004448.70 1000263.03 1041282.63 00:16:42.781 ======================================================== 00:16:42.781 Total : 256.00 0.12 1004499.77 1000263.03 1041282.63 00:16:42.781 00:16:43.040 23:30:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:43.040 23:30:03 -- target/delete_subsystem.sh@57 -- # kill -0 224219 00:16:43.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (224219) - No such process 00:16:43.040 23:30:03 -- target/delete_subsystem.sh@67 -- # wait 224219 00:16:43.040 23:30:03 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:43.040 23:30:03 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:43.040 23:30:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:43.040 23:30:03 -- nvmf/common.sh@116 -- # sync 00:16:43.040 23:30:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:43.040 23:30:03 -- nvmf/common.sh@119 -- # set +e 00:16:43.040 23:30:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:43.040 23:30:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:43.040 rmmod nvme_tcp 00:16:43.040 rmmod nvme_fabrics 00:16:43.040 rmmod nvme_keyring 00:16:43.040 23:30:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:43.040 23:30:03 -- nvmf/common.sh@123 -- # set -e 00:16:43.040 23:30:03 -- nvmf/common.sh@124 -- # return 0 00:16:43.040 23:30:03 -- nvmf/common.sh@477 -- # '[' -n 223499 ']' 00:16:43.040 23:30:03 -- nvmf/common.sh@478 -- # killprocess 223499 00:16:43.040 23:30:03 -- common/autotest_common.sh@926 -- # '[' -z 223499 ']' 00:16:43.040 23:30:03 -- common/autotest_common.sh@930 -- # kill -0 223499 00:16:43.040 23:30:03 -- common/autotest_common.sh@931 -- # uname 00:16:43.040 23:30:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:43.040 23:30:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 223499 00:16:43.040 23:30:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:43.040 23:30:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:43.040 23:30:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 223499' 00:16:43.040 killing process with pid 223499 00:16:43.040 23:30:03 -- common/autotest_common.sh@945 -- # kill 223499 00:16:43.040 23:30:03 -- common/autotest_common.sh@950 -- # wait 223499 00:16:43.299 23:30:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:43.299 23:30:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:43.299 23:30:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:43.299 23:30:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.299 23:30:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:43.299 23:30:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:43.299 23:30:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.299 23:30:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.238 23:30:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:45.238 00:16:45.238 real 0m14.147s 00:16:45.238 user 0m30.851s 00:16:45.238 sys 0m3.762s 00:16:45.238 23:30:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.238 23:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:45.238 ************************************ 00:16:45.238 END TEST nvmf_delete_subsystem 00:16:45.238 ************************************ 00:16:45.497 23:30:06 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:16:45.497 23:30:06 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:45.497 23:30:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:45.497 23:30:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:45.497 23:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:45.497 ************************************ 00:16:45.497 START TEST nvmf_nvme_cli 00:16:45.497 ************************************ 00:16:45.497 23:30:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:45.497 * Looking for test storage... 00:16:45.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.497 23:30:06 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.497 23:30:06 -- nvmf/common.sh@7 -- # uname -s 00:16:45.497 23:30:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.497 23:30:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.497 23:30:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.497 23:30:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.497 23:30:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.497 23:30:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.497 23:30:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.497 23:30:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.497 23:30:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.497 23:30:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.497 23:30:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:45.497 23:30:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:45.497 23:30:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.497 23:30:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.497 23:30:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.497 23:30:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.497 23:30:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.497 23:30:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.497 23:30:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.497 23:30:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.497 23:30:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.497 23:30:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.497 23:30:06 -- paths/export.sh@5 -- # export PATH 00:16:45.497 23:30:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.497 23:30:06 -- nvmf/common.sh@46 -- # : 0 00:16:45.497 23:30:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:45.497 23:30:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:45.497 23:30:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:45.497 23:30:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.497 23:30:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.497 23:30:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:45.497 23:30:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:45.497 23:30:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:45.497 23:30:06 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.497 23:30:06 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.497 23:30:06 -- target/nvme_cli.sh@14 -- # devs=() 00:16:45.497 23:30:06 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:45.497 23:30:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:45.497 23:30:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.497 23:30:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:45.497 23:30:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:45.497 23:30:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 
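[Editor's note] The remove_spdk_ns step entered here is wrapped by xtrace_disable_per_cmd, the same "eval '_remove_spdk_ns 14> /dev/null'" idiom visible in the previous test's teardown above. A minimal sketch of how that per-command silencing can work, assuming the harness points BASH_XTRACEFD at fd 14 (the "14>" redirection in the trace suggests this; the sketch is not the actual autotest_common.sh source):

    #!/usr/bin/env bash
    exec 14>&2          # fd 14 mirrors stderr by default
    BASH_XTRACEFD=14    # route set -x output to fd 14
    set -x
    noisy_helper() {    # hypothetical stand-in for _remove_spdk_ns
        true
    }
    noisy_helper                        # traced: xtrace lines reach stderr
    eval 'noisy_helper 14> /dev/null'   # silenced: the call's inner trace goes to /dev/null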
00:16:45.497 23:30:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.497 23:30:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.497 23:30:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.497 23:30:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:45.497 23:30:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:45.497 23:30:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:45.497 23:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:48.030 23:30:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:48.030 23:30:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:48.030 23:30:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:48.030 23:30:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:48.030 23:30:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:48.030 23:30:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:48.030 23:30:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:48.030 23:30:08 -- nvmf/common.sh@294 -- # net_devs=() 00:16:48.030 23:30:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:48.030 23:30:08 -- nvmf/common.sh@295 -- # e810=() 00:16:48.030 23:30:08 -- nvmf/common.sh@295 -- # local -ga e810 00:16:48.030 23:30:08 -- nvmf/common.sh@296 -- # x722=() 00:16:48.030 23:30:08 -- nvmf/common.sh@296 -- # local -ga x722 00:16:48.030 23:30:08 -- nvmf/common.sh@297 -- # mlx=() 00:16:48.030 23:30:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:48.030 23:30:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.030 23:30:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.030 23:30:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.030 23:30:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.030 23:30:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.030 23:30:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.031 23:30:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.031 23:30:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.031 23:30:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.031 23:30:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.031 23:30:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.031 23:30:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:48.031 23:30:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:48.031 23:30:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:48.031 23:30:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:48.031 23:30:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:48.031 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:48.031 23:30:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:48.031 23:30:08 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:48.031 23:30:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:48.031 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:48.031 23:30:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:48.031 23:30:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:48.031 23:30:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.031 23:30:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:48.031 23:30:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.031 23:30:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:48.031 Found net devices under 0000:84:00.0: cvl_0_0 00:16:48.031 23:30:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.031 23:30:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:48.031 23:30:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.031 23:30:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:48.031 23:30:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.031 23:30:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:48.031 Found net devices under 0000:84:00.1: cvl_0_1 00:16:48.031 23:30:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.031 23:30:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:48.031 23:30:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:48.031 23:30:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:48.031 23:30:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.031 23:30:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.031 23:30:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.031 23:30:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:48.031 23:30:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.031 23:30:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.031 23:30:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:48.031 23:30:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.031 23:30:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.031 23:30:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:48.031 23:30:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:48.031 23:30:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.031 23:30:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.031 23:30:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.031 23:30:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.031 23:30:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:48.031 23:30:08 -- 
nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.031 23:30:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.031 23:30:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.031 23:30:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:48.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:16:48.031 00:16:48.031 --- 10.0.0.2 ping statistics --- 00:16:48.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.031 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:16:48.031 23:30:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:16:48.031 00:16:48.031 --- 10.0.0.1 ping statistics --- 00:16:48.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.031 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:16:48.031 23:30:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.031 23:30:08 -- nvmf/common.sh@410 -- # return 0 00:16:48.031 23:30:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.031 23:30:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.031 23:30:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:48.031 23:30:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.031 23:30:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:48.031 23:30:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:48.031 23:30:08 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:48.031 23:30:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.031 23:30:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:48.031 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:16:48.031 23:30:08 -- nvmf/common.sh@469 -- # nvmfpid=227005 00:16:48.031 23:30:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.031 23:30:08 -- nvmf/common.sh@470 -- # waitforlisten 227005 00:16:48.031 23:30:08 -- common/autotest_common.sh@819 -- # '[' -z 227005 ']' 00:16:48.031 23:30:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.031 23:30:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.031 23:30:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.031 23:30:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.031 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:16:48.031 [2024-07-11 23:30:08.952480] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
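[Editor's note] Condensed for reference, the nvmf_tcp_init bring-up traced above amounts to the following sequence (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the log; root privileges assumed, error handling omitted):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                 # target side lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator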
00:16:48.031 [2024-07-11 23:30:08.952580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.289 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.289 [2024-07-11 23:30:09.033498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.289 [2024-07-11 23:30:09.134191] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.289 [2024-07-11 23:30:09.134339] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.289 [2024-07-11 23:30:09.134359] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.289 [2024-07-11 23:30:09.134373] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.289 [2024-07-11 23:30:09.134440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.289 [2024-07-11 23:30:09.134495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.289 [2024-07-11 23:30:09.134522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.289 [2024-07-11 23:30:09.134525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.547 23:30:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.547 23:30:09 -- common/autotest_common.sh@852 -- # return 0 00:16:48.547 23:30:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:48.547 23:30:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 23:30:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.547 23:30:09 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 [2024-07-11 23:30:09.396892] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 Malloc0 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 Malloc1 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 
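[Editor's note] For reference, the rpc_cmd calls traced so far correspond roughly to plain rpc.py invocations like these (arguments copied from the log; the rpc shorthand variable is introduced here only for readability):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0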
23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 [2024-07-11 23:30:09.482404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:48.547 23:30:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.547 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 23:30:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.547 23:30:09 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:16:48.805 00:16:48.805 Discovery Log Number of Records 2, Generation counter 2 00:16:48.805 =====Discovery Log Entry 0====== 00:16:48.805 trtype: tcp 00:16:48.805 adrfam: ipv4 00:16:48.805 subtype: current discovery subsystem 00:16:48.805 treq: not required 00:16:48.805 portid: 0 00:16:48.805 trsvcid: 4420 00:16:48.805 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:48.805 traddr: 10.0.0.2 00:16:48.805 eflags: explicit discovery connections, duplicate discovery information 00:16:48.805 sectype: none 00:16:48.805 =====Discovery Log Entry 1====== 00:16:48.805 trtype: tcp 00:16:48.805 adrfam: ipv4 00:16:48.805 subtype: nvme subsystem 00:16:48.805 treq: not required 00:16:48.805 portid: 0 00:16:48.805 trsvcid: 4420 00:16:48.805 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:48.805 traddr: 10.0.0.2 00:16:48.805 eflags: none 00:16:48.805 sectype: none 00:16:48.805 23:30:09 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:48.805 23:30:09 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:48.805 23:30:09 -- nvmf/common.sh@510 -- # local dev _ 00:16:48.805 23:30:09 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:48.805 23:30:09 -- nvmf/common.sh@509 -- # nvme list 00:16:48.805 23:30:09 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:48.805 23:30:09 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:48.805 23:30:09 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:48.805 23:30:09 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:48.805 23:30:09 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:48.805 23:30:09 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.370 23:30:10 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:49.370 23:30:10 -- common/autotest_common.sh@1177 -- # local i=0 00:16:49.370 23:30:10 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:49.370 23:30:10 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:16:49.370 23:30:10 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:16:49.370 23:30:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:51.269 23:30:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:51.269 23:30:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:51.269 23:30:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.269 23:30:12 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:51.269 23:30:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.269 23:30:12 -- common/autotest_common.sh@1187 -- # return 0 00:16:51.269 23:30:12 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:51.269 23:30:12 -- nvmf/common.sh@510 -- # local dev _ 00:16:51.269 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.269 23:30:12 -- nvmf/common.sh@509 -- # nvme list 00:16:51.269 23:30:12 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:51.269 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.269 23:30:12 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:51.269 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.269 23:30:12 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:51.269 23:30:12 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:51.269 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.269 23:30:12 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:51.269 23:30:12 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:51.269 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.269 23:30:12 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:51.269 /dev/nvme0n1 ]] 00:16:51.269 23:30:12 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:51.269 23:30:12 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:51.269 23:30:12 -- nvmf/common.sh@510 -- # local dev _ 00:16:51.527 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.527 23:30:12 -- nvmf/common.sh@509 -- # nvme list 00:16:51.527 23:30:12 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:51.527 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.527 23:30:12 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:51.527 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.527 23:30:12 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:51.527 23:30:12 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:51.527 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.527 23:30:12 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:51.527 23:30:12 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:51.527 23:30:12 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:51.527 23:30:12 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:51.527 23:30:12 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.527 23:30:12 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.527 23:30:12 -- common/autotest_common.sh@1198 -- # local i=0 00:16:51.527 23:30:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:51.527 23:30:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.527 23:30:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:51.527 23:30:12 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.527 23:30:12 -- common/autotest_common.sh@1210 -- # return 0 00:16:51.527 23:30:12 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:51.527 23:30:12 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.527 23:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.527 23:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:51.527 23:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.527 23:30:12 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:51.527 23:30:12 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:51.527 23:30:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:51.527 23:30:12 -- nvmf/common.sh@116 -- # sync 00:16:51.527 23:30:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:51.527 23:30:12 -- nvmf/common.sh@119 -- # set +e 00:16:51.527 23:30:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:51.527 23:30:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:51.527 rmmod nvme_tcp 00:16:51.527 rmmod nvme_fabrics 00:16:51.527 rmmod nvme_keyring 00:16:51.527 23:30:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:51.527 23:30:12 -- nvmf/common.sh@123 -- # set -e 00:16:51.527 23:30:12 -- nvmf/common.sh@124 -- # return 0 00:16:51.527 23:30:12 -- nvmf/common.sh@477 -- # '[' -n 227005 ']' 00:16:51.527 23:30:12 -- nvmf/common.sh@478 -- # killprocess 227005 00:16:51.527 23:30:12 -- common/autotest_common.sh@926 -- # '[' -z 227005 ']' 00:16:51.527 23:30:12 -- common/autotest_common.sh@930 -- # kill -0 227005 00:16:51.527 23:30:12 -- common/autotest_common.sh@931 -- # uname 00:16:51.527 23:30:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.527 23:30:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 227005 00:16:51.527 23:30:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:51.527 23:30:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:51.527 23:30:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 227005' 00:16:51.527 killing process with pid 227005 00:16:51.527 23:30:12 -- common/autotest_common.sh@945 -- # kill 227005 00:16:51.527 23:30:12 -- common/autotest_common.sh@950 -- # wait 227005 00:16:52.095 23:30:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:52.095 23:30:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:52.095 23:30:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:52.095 23:30:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.095 23:30:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:52.095 23:30:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.095 23:30:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.095 23:30:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.001 23:30:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:54.001 00:16:54.001 real 0m8.611s 00:16:54.001 user 0m15.140s 00:16:54.001 sys 0m2.663s 00:16:54.001 23:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.001 23:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:54.001 ************************************ 00:16:54.001 END TEST nvmf_nvme_cli 00:16:54.001 ************************************ 00:16:54.001 23:30:14 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:16:54.001 23:30:14 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:54.001 23:30:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:54.001 23:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:54.001 23:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:54.001 ************************************ 00:16:54.001 START TEST nvmf_vfio_user 00:16:54.001 ************************************ 00:16:54.001 23:30:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:54.001 * Looking for test storage... 00:16:54.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.001 23:30:14 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.001 23:30:14 -- nvmf/common.sh@7 -- # uname -s 00:16:54.001 23:30:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.001 23:30:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.001 23:30:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.001 23:30:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.001 23:30:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.001 23:30:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.001 23:30:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.001 23:30:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.001 23:30:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.001 23:30:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.001 23:30:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.001 23:30:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.001 23:30:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.001 23:30:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.001 23:30:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.001 23:30:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.001 23:30:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.001 23:30:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.001 23:30:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.001 23:30:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.001 23:30:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.002 23:30:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.002 23:30:14 -- paths/export.sh@5 -- # export PATH 00:16:54.002 23:30:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.002 23:30:14 -- nvmf/common.sh@46 -- # : 0 00:16:54.002 23:30:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:54.002 23:30:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:54.002 23:30:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:54.002 23:30:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.002 23:30:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.002 23:30:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:54.002 23:30:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:54.002 23:30:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=228023 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 228023' 
00:16:54.002 Process pid: 228023 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:54.002 23:30:14 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 228023 00:16:54.002 23:30:14 -- common/autotest_common.sh@819 -- # '[' -z 228023 ']' 00:16:54.002 23:30:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.002 23:30:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.002 23:30:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.002 23:30:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.002 23:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:54.260 [2024-07-11 23:30:14.981663] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:54.260 [2024-07-11 23:30:14.981756] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.260 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.260 [2024-07-11 23:30:15.057223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.260 [2024-07-11 23:30:15.151383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:54.260 [2024-07-11 23:30:15.151555] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.260 [2024-07-11 23:30:15.151574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.260 [2024-07-11 23:30:15.151589] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
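[Editor's note] The harness has just launched nvmf_tgt in the background and now blocks in waitforlisten until the RPC socket is usable. A minimal illustrative version of that start-and-wait pattern (not the real autotest_common.sh implementation; killprocess is the cleanup helper seen elsewhere in this log):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # poll until the app binds its default RPC socket
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.1
    done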
00:16:54.260 [2024-07-11 23:30:15.151671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.260 [2024-07-11 23:30:15.151701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.260 [2024-07-11 23:30:15.151757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.260 [2024-07-11 23:30:15.151760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.189 23:30:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.189 23:30:16 -- common/autotest_common.sh@852 -- # return 0 00:16:55.189 23:30:16 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:56.558 23:30:17 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:56.816 23:30:17 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:56.816 23:30:17 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:56.816 23:30:17 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:56.816 23:30:17 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:56.816 23:30:17 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:57.073 Malloc1 00:16:57.073 23:30:17 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:57.330 23:30:18 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:57.587 23:30:18 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:57.843 23:30:18 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:57.843 23:30:18 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:57.843 23:30:18 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:58.408 Malloc2 00:16:58.408 23:30:19 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:58.972 23:30:19 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:59.230 23:30:20 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:59.795 23:30:20 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:59.795 23:30:20 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:59.795 23:30:20 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:59.795 23:30:20 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:59.795 23:30:20 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:59.795 23:30:20 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:59.795 [2024-07-11 23:30:20.506991] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:59.795 [2024-07-11 23:30:20.507091] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228700 ] 00:16:59.795 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.795 [2024-07-11 23:30:20.558480] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:59.795 [2024-07-11 23:30:20.567582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:59.795 [2024-07-11 23:30:20.567612] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0cdc72e000 00:16:59.795 [2024-07-11 23:30:20.568573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.569572] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.570574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.571586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.572585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.573588] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.574596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.575605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.795 [2024-07-11 23:30:20.576612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:59.795 [2024-07-11 23:30:20.576632] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0cdb4e2000 00:16:59.795 [2024-07-11 23:30:20.577751] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:59.795 [2024-07-11 23:30:20.592774] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:59.795 [2024-07-11 23:30:20.592816] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:59.795 [2024-07-11 23:30:20.597749] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:59.795 [2024-07-11 23:30:20.597809] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 
num_trackers = 192 00:16:59.795 [2024-07-11 23:30:20.597912] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:59.795 [2024-07-11 23:30:20.597950] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:59.795 [2024-07-11 23:30:20.597960] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:59.795 [2024-07-11 23:30:20.598737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:59.795 [2024-07-11 23:30:20.598759] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:59.795 [2024-07-11 23:30:20.598771] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:59.795 [2024-07-11 23:30:20.599746] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:59.795 [2024-07-11 23:30:20.599764] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:59.795 [2024-07-11 23:30:20.599778] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:59.795 [2024-07-11 23:30:20.603154] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:59.795 [2024-07-11 23:30:20.603174] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:59.795 [2024-07-11 23:30:20.603764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:59.795 [2024-07-11 23:30:20.603786] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:59.795 [2024-07-11 23:30:20.603796] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:59.795 [2024-07-11 23:30:20.603807] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:59.795 [2024-07-11 23:30:20.603917] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:59.795 [2024-07-11 23:30:20.603925] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:59.795 [2024-07-11 23:30:20.603933] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:59.795 [2024-07-11 23:30:20.604770] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:59.795 [2024-07-11 23:30:20.605774] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:59.795 [2024-07-11 23:30:20.606785] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:59.796 [2024-07-11 23:30:20.607829] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:59.796 [2024-07-11 23:30:20.608795] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:59.796 [2024-07-11 23:30:20.608812] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:59.796 [2024-07-11 23:30:20.608821] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.608845] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:59.796 [2024-07-11 23:30:20.608858] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.608885] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:59.796 [2024-07-11 23:30:20.608895] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:59.796 [2024-07-11 23:30:20.608918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609027] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:59.796 [2024-07-11 23:30:20.609036] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:59.796 [2024-07-11 23:30:20.609043] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:59.796 [2024-07-11 23:30:20.609051] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:59.796 [2024-07-11 23:30:20.609058] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:59.796 [2024-07-11 23:30:20.609066] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:59.796 [2024-07-11 23:30:20.609078] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609096] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 
cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.796 [2024-07-11 23:30:20.609190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.796 [2024-07-11 23:30:20.609202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.796 [2024-07-11 23:30:20.609214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.796 [2024-07-11 23:30:20.609222] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609276] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:59.796 [2024-07-11 23:30:20.609284] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609295] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609310] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609399] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609414] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609427] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:59.796 [2024-07-11 23:30:20.609450] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:59.796 [2024-07-11 23:30:20.609459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 
cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609501] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:59.796 [2024-07-11 23:30:20.609520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609535] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609547] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:59.796 [2024-07-11 23:30:20.609554] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:59.796 [2024-07-11 23:30:20.609563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609612] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609626] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609637] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:59.796 [2024-07-11 23:30:20.609645] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:59.796 [2024-07-11 23:30:20.609654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609690] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609705] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609716] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609725] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609734] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - 
Host ID 00:16:59.796 [2024-07-11 23:30:20.609741] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:59.796 [2024-07-11 23:30:20.609750] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:59.796 [2024-07-11 23:30:20.609777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:59.796 [2024-07-11 23:30:20.609882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:59.796 [2024-07-11 23:30:20.609901] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:59.796 [2024-07-11 23:30:20.609909] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:59.796 [2024-07-11 23:30:20.609915] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:59.796 [2024-07-11 23:30:20.609921] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:59.796 [2024-07-11 23:30:20.609930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:59.796 [2024-07-11 23:30:20.609941] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:59.796 [2024-07-11 23:30:20.609949] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:59.797 [2024-07-11 23:30:20.609958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:59.797 [2024-07-11 23:30:20.609968] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:59.797 [2024-07-11 23:30:20.609976] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:59.797 [2024-07-11 23:30:20.609984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:59.797 [2024-07-11 23:30:20.609996] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:59.797 
[2024-07-11 23:30:20.610003] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:59.797 [2024-07-11 23:30:20.610012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:59.797 [2024-07-11 23:30:20.610023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:59.797 [2024-07-11 23:30:20.610043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:59.797 [2024-07-11 23:30:20.610058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:59.797 [2024-07-11 23:30:20.610069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:59.797 ===================================================== 00:16:59.797 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:59.797 ===================================================== 00:16:59.797 Controller Capabilities/Features 00:16:59.797 ================================ 00:16:59.797 Vendor ID: 4e58 00:16:59.797 Subsystem Vendor ID: 4e58 00:16:59.797 Serial Number: SPDK1 00:16:59.797 Model Number: SPDK bdev Controller 00:16:59.797 Firmware Version: 24.01.1 00:16:59.797 Recommended Arb Burst: 6 00:16:59.797 IEEE OUI Identifier: 8d 6b 50 00:16:59.797 Multi-path I/O 00:16:59.797 May have multiple subsystem ports: Yes 00:16:59.797 May have multiple controllers: Yes 00:16:59.797 Associated with SR-IOV VF: No 00:16:59.797 Max Data Transfer Size: 131072 00:16:59.797 Max Number of Namespaces: 32 00:16:59.797 Max Number of I/O Queues: 127 00:16:59.797 NVMe Specification Version (VS): 1.3 00:16:59.797 NVMe Specification Version (Identify): 1.3 00:16:59.797 Maximum Queue Entries: 256 00:16:59.797 Contiguous Queues Required: Yes 00:16:59.797 Arbitration Mechanisms Supported 00:16:59.797 Weighted Round Robin: Not Supported 00:16:59.797 Vendor Specific: Not Supported 00:16:59.797 Reset Timeout: 15000 ms 00:16:59.797 Doorbell Stride: 4 bytes 00:16:59.797 NVM Subsystem Reset: Not Supported 00:16:59.797 Command Sets Supported 00:16:59.797 NVM Command Set: Supported 00:16:59.797 Boot Partition: Not Supported 00:16:59.797 Memory Page Size Minimum: 4096 bytes 00:16:59.797 Memory Page Size Maximum: 4096 bytes 00:16:59.797 Persistent Memory Region: Not Supported 00:16:59.797 Optional Asynchronous Events Supported 00:16:59.797 Namespace Attribute Notices: Supported 00:16:59.797 Firmware Activation Notices: Not Supported 00:16:59.797 ANA Change Notices: Not Supported 00:16:59.797 PLE Aggregate Log Change Notices: Not Supported 00:16:59.797 LBA Status Info Alert Notices: Not Supported 00:16:59.797 EGE Aggregate Log Change Notices: Not Supported 00:16:59.797 Normal NVM Subsystem Shutdown event: Not Supported 00:16:59.797 Zone Descriptor Change Notices: Not Supported 00:16:59.797 Discovery Log Change Notices: Not Supported 00:16:59.797 Controller Attributes 00:16:59.797 128-bit Host Identifier: Supported 00:16:59.797 Non-Operational Permissive Mode: Not Supported 00:16:59.797 NVM Sets: Not Supported 00:16:59.797 Read Recovery Levels: Not Supported 00:16:59.797 Endurance Groups: Not Supported 00:16:59.797 Predictable Latency Mode: Not Supported 00:16:59.797 Traffic Based Keep ALive: Not Supported 00:16:59.797 
Namespace Granularity: Not Supported 00:16:59.797 SQ Associations: Not Supported 00:16:59.797 UUID List: Not Supported 00:16:59.797 Multi-Domain Subsystem: Not Supported 00:16:59.797 Fixed Capacity Management: Not Supported 00:16:59.797 Variable Capacity Management: Not Supported 00:16:59.797 Delete Endurance Group: Not Supported 00:16:59.797 Delete NVM Set: Not Supported 00:16:59.797 Extended LBA Formats Supported: Not Supported 00:16:59.797 Flexible Data Placement Supported: Not Supported 00:16:59.797 00:16:59.797 Controller Memory Buffer Support 00:16:59.797 ================================ 00:16:59.797 Supported: No 00:16:59.797 00:16:59.797 Persistent Memory Region Support 00:16:59.797 ================================ 00:16:59.797 Supported: No 00:16:59.797 00:16:59.797 Admin Command Set Attributes 00:16:59.797 ============================ 00:16:59.797 Security Send/Receive: Not Supported 00:16:59.797 Format NVM: Not Supported 00:16:59.797 Firmware Activate/Download: Not Supported 00:16:59.797 Namespace Management: Not Supported 00:16:59.797 Device Self-Test: Not Supported 00:16:59.797 Directives: Not Supported 00:16:59.797 NVMe-MI: Not Supported 00:16:59.797 Virtualization Management: Not Supported 00:16:59.797 Doorbell Buffer Config: Not Supported 00:16:59.797 Get LBA Status Capability: Not Supported 00:16:59.797 Command & Feature Lockdown Capability: Not Supported 00:16:59.797 Abort Command Limit: 4 00:16:59.797 Async Event Request Limit: 4 00:16:59.797 Number of Firmware Slots: N/A 00:16:59.797 Firmware Slot 1 Read-Only: N/A 00:16:59.797 Firmware Activation Without Reset: N/A 00:16:59.797 Multiple Update Detection Support: N/A 00:16:59.797 Firmware Update Granularity: No Information Provided 00:16:59.797 Per-Namespace SMART Log: No 00:16:59.797 Asymmetric Namespace Access Log Page: Not Supported 00:16:59.797 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:59.797 Command Effects Log Page: Supported 00:16:59.797 Get Log Page Extended Data: Supported 00:16:59.797 Telemetry Log Pages: Not Supported 00:16:59.797 Persistent Event Log Pages: Not Supported 00:16:59.797 Supported Log Pages Log Page: May Support 00:16:59.797 Commands Supported & Effects Log Page: Not Supported 00:16:59.797 Feature Identifiers & Effects Log Page:May Support 00:16:59.797 NVMe-MI Commands & Effects Log Page: May Support 00:16:59.797 Data Area 4 for Telemetry Log: Not Supported 00:16:59.797 Error Log Page Entries Supported: 128 00:16:59.797 Keep Alive: Supported 00:16:59.797 Keep Alive Granularity: 10000 ms 00:16:59.797 00:16:59.797 NVM Command Set Attributes 00:16:59.797 ========================== 00:16:59.797 Submission Queue Entry Size 00:16:59.797 Max: 64 00:16:59.797 Min: 64 00:16:59.797 Completion Queue Entry Size 00:16:59.797 Max: 16 00:16:59.797 Min: 16 00:16:59.797 Number of Namespaces: 32 00:16:59.797 Compare Command: Supported 00:16:59.797 Write Uncorrectable Command: Not Supported 00:16:59.797 Dataset Management Command: Supported 00:16:59.797 Write Zeroes Command: Supported 00:16:59.797 Set Features Save Field: Not Supported 00:16:59.797 Reservations: Not Supported 00:16:59.797 Timestamp: Not Supported 00:16:59.797 Copy: Supported 00:16:59.797 Volatile Write Cache: Present 00:16:59.797 Atomic Write Unit (Normal): 1 00:16:59.797 Atomic Write Unit (PFail): 1 00:16:59.797 Atomic Compare & Write Unit: 1 00:16:59.797 Fused Compare & Write: Supported 00:16:59.797 Scatter-Gather List 00:16:59.797 SGL Command Set: Supported (Dword aligned) 00:16:59.797 SGL Keyed: Not Supported 00:16:59.797 SGL Bit Bucket 
Descriptor: Not Supported 00:16:59.797 SGL Metadata Pointer: Not Supported 00:16:59.797 Oversized SGL: Not Supported 00:16:59.797 SGL Metadata Address: Not Supported 00:16:59.797 SGL Offset: Not Supported 00:16:59.797 Transport SGL Data Block: Not Supported 00:16:59.797 Replay Protected Memory Block: Not Supported 00:16:59.797 00:16:59.797 Firmware Slot Information 00:16:59.797 ========================= 00:16:59.797 Active slot: 1 00:16:59.797 Slot 1 Firmware Revision: 24.01.1 00:16:59.797 00:16:59.797 00:16:59.797 Commands Supported and Effects 00:16:59.797 ============================== 00:16:59.797 Admin Commands 00:16:59.797 -------------- 00:16:59.797 Get Log Page (02h): Supported 00:16:59.797 Identify (06h): Supported 00:16:59.797 Abort (08h): Supported 00:16:59.797 Set Features (09h): Supported 00:16:59.797 Get Features (0Ah): Supported 00:16:59.797 Asynchronous Event Request (0Ch): Supported 00:16:59.797 Keep Alive (18h): Supported 00:16:59.797 I/O Commands 00:16:59.797 ------------ 00:16:59.797 Flush (00h): Supported LBA-Change 00:16:59.797 Write (01h): Supported LBA-Change 00:16:59.797 Read (02h): Supported 00:16:59.797 Compare (05h): Supported 00:16:59.797 Write Zeroes (08h): Supported LBA-Change 00:16:59.797 Dataset Management (09h): Supported LBA-Change 00:16:59.797 Copy (19h): Supported LBA-Change 00:16:59.797 Unknown (79h): Supported LBA-Change 00:16:59.797 Unknown (7Ah): Supported 00:16:59.797 00:16:59.797 Error Log 00:16:59.797 ========= 00:16:59.797 00:16:59.797 Arbitration 00:16:59.797 =========== 00:16:59.797 Arbitration Burst: 1 00:16:59.797 00:16:59.797 Power Management 00:16:59.797 ================ 00:16:59.798 Number of Power States: 1 00:16:59.798 Current Power State: Power State #0 00:16:59.798 Power State #0: 00:16:59.798 Max Power: 0.00 W 00:16:59.798 Non-Operational State: Operational 00:16:59.798 Entry Latency: Not Reported 00:16:59.798 Exit Latency: Not Reported 00:16:59.798 Relative Read Throughput: 0 00:16:59.798 Relative Read Latency: 0 00:16:59.798 Relative Write Throughput: 0 00:16:59.798 Relative Write Latency: 0 00:16:59.798 Idle Power: Not Reported 00:16:59.798 Active Power: Not Reported 00:16:59.798 Non-Operational Permissive Mode: Not Supported 00:16:59.798 00:16:59.798 Health Information 00:16:59.798 ================== 00:16:59.798 Critical Warnings: 00:16:59.798 Available Spare Space: OK 00:16:59.798 Temperature: OK 00:16:59.798 Device Reliability: OK 00:16:59.798 Read Only: No 00:16:59.798 Volatile Memory Backup: OK
[2024-07-11 23:30:20.610233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:59.798 [2024-07-11 23:30:20.610265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:59.798 [2024-07-11 23:30:20.610310] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:59.798 [2024-07-11 23:30:20.610330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.798 [2024-07-11 23:30:20.610342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.798 [2024-07-11 23:30:20.610352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.798 [2024-07-11 23:30:20.610362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.798 [2024-07-11 23:30:20.610807] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:59.798 [2024-07-11 23:30:20.610830] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:59.798 [2024-07-11 23:30:20.611847] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:59.798 [2024-07-11 23:30:20.611860] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:59.798 [2024-07-11 23:30:20.612814] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:59.798 [2024-07-11 23:30:20.612839] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:59.798 [2024-07-11 23:30:20.612896] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:59.798 [2024-07-11 23:30:20.616153] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:16:59.798 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:59.798 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:59.798 Available Spare: 0% 00:16:59.798 Available Spare Threshold: 0% 00:16:59.798 Life Percentage Used: 0% 00:16:59.798 Data Units Read: 0 00:16:59.798 Data Units Written: 0 00:16:59.798 Host Read Commands: 0 00:16:59.798 Host Write Commands: 0 00:16:59.798 Controller Busy Time: 0 minutes 00:16:59.798 Power Cycles: 0 00:16:59.798 Power On Hours: 0 hours 00:16:59.798 Unsafe Shutdowns: 0 00:16:59.798 Unrecoverable Media Errors: 0 00:16:59.798 Lifetime Error Log Entries: 0 00:16:59.798 Warning Temperature Time: 0 minutes 00:16:59.798 Critical Temperature Time: 0 minutes 00:16:59.798 00:16:59.798 Number of Queues 00:16:59.798 ================ 00:16:59.798 Number of I/O Submission Queues: 127 00:16:59.798 Number of I/O Completion Queues: 127 00:16:59.798 00:16:59.798 Active Namespaces 00:16:59.798 ================= 00:16:59.798 Namespace ID:1 00:16:59.798 Error Recovery Timeout: Unlimited 00:16:59.798 Command Set Identifier: NVM (00h) 00:16:59.798 Deallocate: Supported 00:16:59.798 Deallocated/Unwritten Error: Not Supported 00:16:59.798 Deallocated Read Value: Unknown 00:16:59.798 Deallocate in Write Zeroes: Not Supported 00:16:59.798 Deallocated Guard Field: 0xFFFF 00:16:59.798 Flush: Supported 00:16:59.798 Reservation: Supported 00:16:59.798 Namespace Sharing Capabilities: Multiple Controllers 00:16:59.798 Size (in LBAs): 131072 (0GiB) 00:16:59.798 Capacity (in LBAs): 131072 (0GiB) 00:16:59.798 Utilization (in LBAs): 131072 (0GiB) 00:16:59.798 NGUID: A4C9D06C14E94646A23AC73A9B53A64D 00:16:59.798 UUID: a4c9d06c-14e9-4646-a23a-c73a9b53a64d 00:16:59.798 Thin Provisioning: Not Supported 00:16:59.798 Per-NS Atomic Units: Yes 00:16:59.798 Atomic Boundary Size (Normal): 0 00:16:59.798 Atomic Boundary Size (PFail): 0 00:16:59.798 Atomic Boundary Offset: 0 00:16:59.798 Maximum Single Source Range Length: 65535 00:16:59.798 Maximum Copy Length: 65535 00:16:59.798 Maximum Source Range Count: 1 00:16:59.798 NGUID/EUI64
Never Reused: No 00:16:59.798 Namespace Write Protected: No 00:16:59.798 Number of LBA Formats: 1 00:16:59.798 Current LBA Format: LBA Format #00 00:16:59.798 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:59.798 00:16:59.798 23:30:20 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:59.798 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.056 Initializing NVMe Controllers 00:17:05.056 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:05.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:05.056 Initialization complete. Launching workers. 00:17:05.056 ======================================================== 00:17:05.056 Latency(us) 00:17:05.056 Device Information : IOPS MiB/s Average min max 00:17:05.056 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 37425.40 146.19 3420.35 1138.84 7402.43 00:17:05.056 ======================================================== 00:17:05.056 Total : 37425.40 146.19 3420.35 1138.84 7402.43 00:17:05.056 00:17:05.056 23:30:25 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:05.056 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.350 Initializing NVMe Controllers 00:17:10.350 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:10.350 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:10.350 Initialization complete. Launching workers. 00:17:10.350 ======================================================== 00:17:10.350 Latency(us) 00:17:10.350 Device Information : IOPS MiB/s Average min max 00:17:10.350 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7995.62 6992.13 14993.41 00:17:10.350 ======================================================== 00:17:10.350 Total : 16025.60 62.60 7995.62 6992.13 14993.41 00:17:10.350 00:17:10.350 23:30:31 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:10.607 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.872 Initializing NVMe Controllers 00:17:15.872 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:15.872 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:15.872 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:15.872 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:15.872 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:15.872 Initialization complete. Launching workers. 
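The two spdk_nvme_perf tables above are internally consistent: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size passed with -o. A minimal C sanity check follows; the numbers are copied from the 4 KiB read run above, and the snippet is illustrative arithmetic, not part of the test suite:

    #include <stdio.h>

    int main(void)
    {
        double iops = 37425.40;                    /* IOPS from the read run */
        double io_size = 4096.0;                   /* bytes, from -o 4096 */
        double mib_s = iops * io_size / (1024.0 * 1024.0);
        printf("%.2f MiB/s\n", mib_s);             /* prints 146.19, matching the table */
        return 0;
    }

The write run checks out the same way: 16025.60 IOPS x 4096 B is about 62.60 MiB/s, as reported.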
00:17:15.872 Starting thread on core 2 00:17:15.872 Starting thread on core 3 00:17:15.872 Starting thread on core 1 00:17:15.872 23:30:36 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:15.872 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.154 Initializing NVMe Controllers 00:17:19.154 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:19.154 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:19.154 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:19.154 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:19.154 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:19.154 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:19.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:19.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:19.154 Initialization complete. Launching workers. 00:17:19.154 Starting thread on core 1 with urgent priority queue 00:17:19.154 Starting thread on core 2 with urgent priority queue 00:17:19.154 Starting thread on core 3 with urgent priority queue 00:17:19.154 Starting thread on core 0 with urgent priority queue 00:17:19.154 SPDK bdev Controller (SPDK1 ) core 0: 5093.00 IO/s 19.63 secs/100000 ios 00:17:19.154 SPDK bdev Controller (SPDK1 ) core 1: 4754.00 IO/s 21.03 secs/100000 ios 00:17:19.154 SPDK bdev Controller (SPDK1 ) core 2: 5503.33 IO/s 18.17 secs/100000 ios 00:17:19.154 SPDK bdev Controller (SPDK1 ) core 3: 5561.33 IO/s 17.98 secs/100000 ios 00:17:19.154 ======================================================== 00:17:19.154 00:17:19.154 23:30:40 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:19.154 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.411 Initializing NVMe Controllers 00:17:19.411 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:19.411 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:19.411 Namespace ID: 1 size: 0GB 00:17:19.411 Initialization complete. 00:17:19.411 INFO: using host memory buffer for IO 00:17:19.411 Hello world! 00:17:19.411 23:30:40 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:19.668 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.040 Initializing NVMe Controllers 00:17:21.040 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:21.040 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:21.040 Initialization complete. Launching workers. 
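The arbitration example whose per-core results appear above submits I/O on qpairs created with different arbitration priorities (the "urgent priority queue" threads in the log). Below is a minimal sketch of how a qpair can request urgent priority through SPDK's public NVMe driver API; it is an illustration under that assumption, not the example's actual source, and the priority only takes effect when the controller is configured for weighted round robin arbitration:

    #include "spdk/nvme.h"

    /* Allocate an I/O qpair that asks for urgent arbitration priority. */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;       /* vs. HIGH/MEDIUM/LOW */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }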
00:17:21.040 submit (in ns) avg, min, max = 10918.7, 3514.4, 4015068.9 00:17:21.040 complete (in ns) avg, min, max = 21425.5, 2033.3, 4016167.8 00:17:21.040 00:17:21.040 Submit histogram 00:17:21.040 ================ 00:17:21.040 Range in us Cumulative Count 00:17:21.040 3.508 - 3.532: 0.0583% ( 8) 00:17:21.040 3.532 - 3.556: 0.9550% ( 123) 00:17:21.040 3.556 - 3.579: 8.1213% ( 983) 00:17:21.040 3.579 - 3.603: 21.2291% ( 1798) 00:17:21.040 3.603 - 3.627: 36.1814% ( 2051) 00:17:21.041 3.627 - 3.650: 45.8263% ( 1323) 00:17:21.041 3.650 - 3.674: 50.7400% ( 674) 00:17:21.041 3.674 - 3.698: 55.2818% ( 623) 00:17:21.041 3.698 - 3.721: 60.1225% ( 664) 00:17:21.041 3.721 - 3.745: 63.4978% ( 463) 00:17:21.041 3.745 - 3.769: 65.6485% ( 295) 00:17:21.041 3.769 - 3.793: 67.4710% ( 250) 00:17:21.041 3.793 - 3.816: 71.5098% ( 554) 00:17:21.041 3.816 - 3.840: 77.1306% ( 771) 00:17:21.041 3.840 - 3.864: 82.4233% ( 726) 00:17:21.041 3.864 - 3.887: 85.1936% ( 380) 00:17:21.041 3.887 - 3.911: 87.2275% ( 279) 00:17:21.041 3.911 - 3.935: 88.8897% ( 228) 00:17:21.041 3.935 - 3.959: 90.3623% ( 202) 00:17:21.041 3.959 - 3.982: 91.4996% ( 156) 00:17:21.041 3.982 - 4.006: 92.2432% ( 102) 00:17:21.041 4.006 - 4.030: 92.9139% ( 92) 00:17:21.041 4.030 - 4.053: 93.7596% ( 116) 00:17:21.041 4.053 - 4.077: 94.6490% ( 122) 00:17:21.041 4.077 - 4.101: 95.1666% ( 71) 00:17:21.041 4.101 - 4.124: 95.5019% ( 46) 00:17:21.041 4.124 - 4.148: 95.8664% ( 50) 00:17:21.041 4.148 - 4.172: 96.0851% ( 30) 00:17:21.041 4.172 - 4.196: 96.3039% ( 30) 00:17:21.041 4.196 - 4.219: 96.4424% ( 19) 00:17:21.041 4.219 - 4.243: 96.6028% ( 22) 00:17:21.041 4.243 - 4.267: 96.6902% ( 12) 00:17:21.041 4.267 - 4.290: 96.8798% ( 26) 00:17:21.041 4.290 - 4.314: 97.0183% ( 19) 00:17:21.041 4.314 - 4.338: 97.1058% ( 12) 00:17:21.041 4.338 - 4.361: 97.1714% ( 9) 00:17:21.041 4.361 - 4.385: 97.2151% ( 6) 00:17:21.041 4.385 - 4.409: 97.2662% ( 7) 00:17:21.041 4.409 - 4.433: 97.2807% ( 2) 00:17:21.041 4.433 - 4.456: 97.2953% ( 2) 00:17:21.041 4.456 - 4.480: 97.3391% ( 6) 00:17:21.041 4.504 - 4.527: 97.3464% ( 1) 00:17:21.041 4.575 - 4.599: 97.3536% ( 1) 00:17:21.041 4.599 - 4.622: 97.3609% ( 1) 00:17:21.041 4.622 - 4.646: 97.3682% ( 1) 00:17:21.041 4.646 - 4.670: 97.3755% ( 1) 00:17:21.041 4.670 - 4.693: 97.3974% ( 3) 00:17:21.041 4.693 - 4.717: 97.4266% ( 4) 00:17:21.041 4.717 - 4.741: 97.4484% ( 3) 00:17:21.041 4.741 - 4.764: 97.4995% ( 7) 00:17:21.041 4.764 - 4.788: 97.5213% ( 3) 00:17:21.041 4.788 - 4.812: 97.5505% ( 4) 00:17:21.041 4.812 - 4.836: 97.5942% ( 6) 00:17:21.041 4.836 - 4.859: 97.6453% ( 7) 00:17:21.041 4.859 - 4.883: 97.6671% ( 3) 00:17:21.041 4.883 - 4.907: 97.7182% ( 7) 00:17:21.041 4.907 - 4.930: 97.7400% ( 3) 00:17:21.041 4.930 - 4.954: 97.7546% ( 2) 00:17:21.041 4.954 - 4.978: 97.7984% ( 6) 00:17:21.041 4.978 - 5.001: 97.8494% ( 7) 00:17:21.041 5.001 - 5.025: 97.8640% ( 2) 00:17:21.041 5.025 - 5.049: 97.8785% ( 2) 00:17:21.041 5.049 - 5.073: 97.9223% ( 6) 00:17:21.041 5.073 - 5.096: 97.9442% ( 3) 00:17:21.041 5.096 - 5.120: 97.9733% ( 4) 00:17:21.041 5.120 - 5.144: 97.9879% ( 2) 00:17:21.041 5.144 - 5.167: 98.0171% ( 4) 00:17:21.041 5.167 - 5.191: 98.0389% ( 3) 00:17:21.041 5.191 - 5.215: 98.0535% ( 2) 00:17:21.041 5.310 - 5.333: 98.0681% ( 2) 00:17:21.041 5.333 - 5.357: 98.0754% ( 1) 00:17:21.041 5.381 - 5.404: 98.0827% ( 1) 00:17:21.041 5.547 - 5.570: 98.0973% ( 2) 00:17:21.041 5.570 - 5.594: 98.1045% ( 1) 00:17:21.041 5.641 - 5.665: 98.1118% ( 1) 00:17:21.041 6.116 - 6.163: 98.1191% ( 1) 00:17:21.041 6.163 - 6.210: 98.1264% ( 1) 
00:17:21.041 6.353 - 6.400: 98.1337% ( 1) 00:17:21.041 6.969 - 7.016: 98.1410% ( 1) 00:17:21.041 7.206 - 7.253: 98.1483% ( 1) 00:17:21.041 7.443 - 7.490: 98.1629% ( 2) 00:17:21.041 7.490 - 7.538: 98.1702% ( 1) 00:17:21.041 7.538 - 7.585: 98.1774% ( 1) 00:17:21.041 7.585 - 7.633: 98.1847% ( 1) 00:17:21.041 7.680 - 7.727: 98.1993% ( 2) 00:17:21.041 7.775 - 7.822: 98.2139% ( 2) 00:17:21.041 7.822 - 7.870: 98.2212% ( 1) 00:17:21.041 7.870 - 7.917: 98.2358% ( 2) 00:17:21.041 7.917 - 7.964: 98.2431% ( 1) 00:17:21.041 7.964 - 8.012: 98.2503% ( 1) 00:17:21.041 8.012 - 8.059: 98.2795% ( 4) 00:17:21.041 8.059 - 8.107: 98.2868% ( 1) 00:17:21.041 8.107 - 8.154: 98.3014% ( 2) 00:17:21.041 8.296 - 8.344: 98.3160% ( 2) 00:17:21.041 8.344 - 8.391: 98.3232% ( 1) 00:17:21.041 8.391 - 8.439: 98.3451% ( 3) 00:17:21.041 8.439 - 8.486: 98.3816% ( 5) 00:17:21.041 8.486 - 8.533: 98.4034% ( 3) 00:17:21.041 8.533 - 8.581: 98.4107% ( 1) 00:17:21.041 8.581 - 8.628: 98.4180% ( 1) 00:17:21.041 8.676 - 8.723: 98.4399% ( 3) 00:17:21.041 8.818 - 8.865: 98.4472% ( 1) 00:17:21.041 8.960 - 9.007: 98.4618% ( 2) 00:17:21.041 9.007 - 9.055: 98.4691% ( 1) 00:17:21.041 9.055 - 9.102: 98.4836% ( 2) 00:17:21.041 9.102 - 9.150: 98.4909% ( 1) 00:17:21.041 9.150 - 9.197: 98.4982% ( 1) 00:17:21.041 9.244 - 9.292: 98.5055% ( 1) 00:17:21.041 9.339 - 9.387: 98.5128% ( 1) 00:17:21.041 9.387 - 9.434: 98.5201% ( 1) 00:17:21.041 9.434 - 9.481: 98.5274% ( 1) 00:17:21.041 9.529 - 9.576: 98.5347% ( 1) 00:17:21.041 9.576 - 9.624: 98.5492% ( 2) 00:17:21.041 9.671 - 9.719: 98.5565% ( 1) 00:17:21.041 9.766 - 9.813: 98.5711% ( 2) 00:17:21.041 10.098 - 10.145: 98.5857% ( 2) 00:17:21.041 10.193 - 10.240: 98.5930% ( 1) 00:17:21.041 10.287 - 10.335: 98.6003% ( 1) 00:17:21.041 10.430 - 10.477: 98.6076% ( 1) 00:17:21.041 10.477 - 10.524: 98.6221% ( 2) 00:17:21.041 10.572 - 10.619: 98.6367% ( 2) 00:17:21.041 10.619 - 10.667: 98.6440% ( 1) 00:17:21.041 10.809 - 10.856: 98.6513% ( 1) 00:17:21.041 10.856 - 10.904: 98.6586% ( 1) 00:17:21.041 10.951 - 10.999: 98.6659% ( 1) 00:17:21.041 11.093 - 11.141: 98.6732% ( 1) 00:17:21.042 11.141 - 11.188: 98.6805% ( 1) 00:17:21.042 11.236 - 11.283: 98.6878% ( 1) 00:17:21.042 11.615 - 11.662: 98.6950% ( 1) 00:17:21.042 12.231 - 12.326: 98.7023% ( 1) 00:17:21.042 12.516 - 12.610: 98.7096% ( 1) 00:17:21.042 12.610 - 12.705: 98.7169% ( 1) 00:17:21.042 12.800 - 12.895: 98.7461% ( 4) 00:17:21.042 12.990 - 13.084: 98.7534% ( 1) 00:17:21.042 13.084 - 13.179: 98.7680% ( 2) 00:17:21.042 13.369 - 13.464: 98.7825% ( 2) 00:17:21.042 13.464 - 13.559: 98.7898% ( 1) 00:17:21.042 13.748 - 13.843: 98.7971% ( 1) 00:17:21.042 13.843 - 13.938: 98.8117% ( 2) 00:17:21.042 14.127 - 14.222: 98.8190% ( 1) 00:17:21.042 14.696 - 14.791: 98.8263% ( 1) 00:17:21.042 14.886 - 14.981: 98.8336% ( 1) 00:17:21.042 15.076 - 15.170: 98.8409% ( 1) 00:17:21.042 17.161 - 17.256: 98.8481% ( 1) 00:17:21.042 17.256 - 17.351: 98.8700% ( 3) 00:17:21.042 17.351 - 17.446: 98.8846% ( 2) 00:17:21.042 17.446 - 17.541: 98.8992% ( 2) 00:17:21.042 17.541 - 17.636: 98.9283% ( 4) 00:17:21.042 17.636 - 17.730: 99.0012% ( 10) 00:17:21.042 17.730 - 17.825: 99.0231% ( 3) 00:17:21.042 17.825 - 17.920: 99.0814% ( 8) 00:17:21.042 17.920 - 18.015: 99.1398% ( 8) 00:17:21.042 18.015 - 18.110: 99.2345% ( 13) 00:17:21.042 18.110 - 18.204: 99.2856% ( 7) 00:17:21.042 18.204 - 18.299: 99.3366% ( 7) 00:17:21.042 18.299 - 18.394: 99.4532% ( 16) 00:17:21.042 18.394 - 18.489: 99.5116% ( 8) 00:17:21.042 18.489 - 18.584: 99.5480% ( 5) 00:17:21.042 18.584 - 18.679: 99.5990% ( 7) 00:17:21.042 18.679 
- 18.773: 99.6282% ( 4) 00:17:21.042 18.773 - 18.868: 99.6501% ( 3) 00:17:21.042 18.868 - 18.963: 99.6865% ( 5) 00:17:21.042 18.963 - 19.058: 99.7084% ( 3) 00:17:21.042 19.058 - 19.153: 99.7230% ( 2) 00:17:21.042 19.153 - 19.247: 99.7376% ( 2) 00:17:21.042 19.816 - 19.911: 99.7594% ( 3) 00:17:21.042 19.911 - 20.006: 99.7667% ( 1) 00:17:21.042 20.290 - 20.385: 99.7740% ( 1) 00:17:21.042 22.566 - 22.661: 99.7813% ( 1) 00:17:21.042 23.419 - 23.514: 99.7886% ( 1) 00:17:21.042 23.514 - 23.609: 99.7959% ( 1) 00:17:21.042 24.462 - 24.652: 99.8032% ( 1) 00:17:21.042 24.841 - 25.031: 99.8105% ( 1) 00:17:21.042 25.600 - 25.790: 99.8177% ( 1) 00:17:21.042 27.496 - 27.686: 99.8250% ( 1) 00:17:21.042 3980.705 - 4004.978: 99.9708% ( 20) 00:17:21.042 4004.978 - 4029.250: 100.0000% ( 4) 00:17:21.042 00:17:21.042 Complete histogram 00:17:21.042 ================== 00:17:21.042 Range in us Cumulative Count 00:17:21.042 2.027 - 2.039: 0.0948% ( 13) 00:17:21.042 2.039 - 2.050: 15.9729% ( 2178) 00:17:21.042 2.050 - 2.062: 29.4161% ( 1844) 00:17:21.042 2.062 - 2.074: 33.4548% ( 554) 00:17:21.042 2.074 - 2.086: 56.3097% ( 3135) 00:17:21.042 2.086 - 2.098: 62.7761% ( 887) 00:17:21.042 2.098 - 2.110: 64.8247% ( 281) 00:17:21.042 2.110 - 2.121: 69.9861% ( 708) 00:17:21.042 2.121 - 2.133: 71.1745% ( 163) 00:17:21.042 2.133 - 2.145: 74.8123% ( 499) 00:17:21.042 2.145 - 2.157: 81.7380% ( 950) 00:17:21.042 2.157 - 2.169: 83.4366% ( 233) 00:17:21.042 2.169 - 2.181: 84.9821% ( 212) 00:17:21.042 2.181 - 2.193: 87.4608% ( 340) 00:17:21.042 2.193 - 2.204: 88.6856% ( 168) 00:17:21.042 2.204 - 2.216: 90.9455% ( 310) 00:17:21.042 2.216 - 2.228: 93.8252% ( 395) 00:17:21.042 2.228 - 2.240: 94.3647% ( 74) 00:17:21.042 2.240 - 2.252: 94.8896% ( 72) 00:17:21.042 2.252 - 2.264: 95.2395% ( 48) 00:17:21.042 2.264 - 2.276: 95.4436% ( 28) 00:17:21.042 2.276 - 2.287: 95.8810% ( 60) 00:17:21.042 2.287 - 2.299: 95.9831% ( 14) 00:17:21.042 2.299 - 2.311: 96.0341% ( 7) 00:17:21.042 2.311 - 2.323: 96.1508% ( 16) 00:17:21.042 2.323 - 2.335: 96.3840% ( 32) 00:17:21.042 2.335 - 2.347: 96.5663% ( 25) 00:17:21.042 2.347 - 2.359: 96.9454% ( 52) 00:17:21.042 2.359 - 2.370: 97.2735% ( 45) 00:17:21.042 2.370 - 2.382: 97.4266% ( 21) 00:17:21.042 2.382 - 2.394: 97.6525% ( 31) 00:17:21.042 2.394 - 2.406: 97.8494% ( 27) 00:17:21.042 2.406 - 2.418: 97.9442% ( 13) 00:17:21.042 2.418 - 2.430: 98.0316% ( 12) 00:17:21.042 2.430 - 2.441: 98.1702% ( 19) 00:17:21.042 2.441 - 2.453: 98.2139% ( 6) 00:17:21.042 2.453 - 2.465: 98.3232% ( 15) 00:17:21.042 2.465 - 2.477: 98.3670% ( 6) 00:17:21.042 2.477 - 2.489: 98.4034% ( 5) 00:17:21.042 2.489 - 2.501: 98.4399% ( 5) 00:17:21.042 2.501 - 2.513: 98.4763% ( 5) 00:17:21.042 2.513 - 2.524: 98.5128% ( 5) 00:17:21.042 2.524 - 2.536: 98.5274% ( 2) 00:17:21.042 2.536 - 2.548: 98.5347% ( 1) 00:17:21.042 2.572 - 2.584: 98.5492% ( 2) 00:17:21.042 2.667 - 2.679: 98.5638% ( 2) 00:17:21.042 3.200 - 3.224: 98.5857% ( 3) 00:17:21.042 3.247 - 3.271: 98.6003% ( 2) 00:17:21.042 3.271 - 3.295: 98.6221% ( 3) 00:17:21.042 3.295 - 3.319: 98.6440% ( 3) 00:17:21.042 3.319 - 3.342: 98.6586% ( 2) 00:17:21.042 3.342 - 3.366: 98.6659% ( 1) 00:17:21.042 3.366 - 3.390: 98.6878% ( 3) 00:17:21.042 3.461 - 3.484: 98.7023% ( 2) 00:17:21.042 3.556 - 3.579: 98.7242% ( 3) 00:17:21.042 3.603 - 3.627: 98.7315% ( 1) 00:17:21.042 3.627 - 3.650: 98.7388% ( 1) 00:17:21.042 3.721 - 3.745: 98.7461% ( 1) 00:17:21.042 3.793 - 3.816: 98.7607% ( 2) 00:17:21.042 3.816 - 3.840: 98.7680% ( 1) 00:17:21.042 3.840 - 3.864: 98.7752% ( 1) 00:17:21.042 3.935 - 3.959: 98.7825% ( 
1) 00:17:21.042 5.381 - 5.404: 98.7898% ( 1) 00:17:21.042 5.476 - 5.499: 98.7971% ( 1) 00:17:21.042 5.926 - 5.950: 98.8044% ( 1) 00:17:21.042 6.163 - 6.210: 98.8117% ( 1) 00:17:21.042 6.210 - 6.258: 98.8190% ( 1) 00:17:21.042 6.353 - 6.400: 98.8263% ( 1) 00:17:21.043 6.495 - 6.542: 98.8336% ( 1) 00:17:21.043 6.637 - 6.684: 98.8409% ( 1) 00:17:21.043 7.301 - 7.348: 98.8554% ( 2) 00:17:21.043 7.396 - 7.443: 98.8627% ( 1) 00:17:21.043 7.490 - 7.538: 98.8700% ( 1) 00:17:21.043 7.917 - 7.964: 98.8773% ( 1) 00:17:21.043 8.439 - 8.486: 98.8846% ( 1) 00:17:21.043 8.628 - 8.676: 98.8919% ( 1) 00:17:21.043 8.723 - 8.770: 98.8992% ( 1) 00:17:21.043 8.770 - 8.818: 98.9065% ( 1) 00:17:21.043 8.960 - 9.007: 98.9138% ( 1) 00:17:21.043 9.055 - 9.102: 98.9210% ( 1) 00:17:21.043 15.455 - 15.550: 98.9356% ( 2) 00:17:21.043 15.550 - 15.644: 98.9502% ( 2) 00:17:21.043 15.739 - 15.834: 98.9575% ( 1) 00:17:21.043 15.834 - 15.929: 98.9867% ( 4) 00:17:21.043 15.929 - 16.024: 99.0012% ( 2) 00:17:21.043 16.024 - 16.119: 99.0377% ( 5) 00:17:21.043 16.119 - 16.213: 99.0741% ( 5) 00:17:21.043 16.213 - 16.308: 99.0887% ( 2) 00:17:21.043 16.308 - 16.403: 99.1543% ( 9) 00:17:21.043 16.403 - 16.498: 99.1689% ( 2) 00:17:21.043 16.498 - 16.593: 99.2199% ( 7) 00:17:21.043 16.593 - 16.687: 99.2710% ( 7) 00:17:21.043 16.687 - 16.782: 99.3074% ( 5) 00:17:21.043 16.782 - 16.877: 99.3439% ( 5) 00:17:21.043 16.877 - 16.972: 99.3658% ( 3) 00:17:21.043 16.972 - 17.067: 99.3876% ( 3) 00:17:21.043 17.067 - 17.161: 99.4095% ( 3) 00:17:21.043 17.161 - 17.256: 99.4168% ( 1) 00:17:21.043 17.256 - 17.351: 99.4314% ( 2) 00:17:21.043 17.351 - 17.446: 99.4387% ( 1) 00:17:21.043 17.636 - 17.730: 99.4532% ( 2) 00:17:21.043 17.730 - 17.825: 99.4605% ( 1) 00:17:21.043 17.920 - 18.015: 99.4678% ( 1) 00:17:21.043 18.015 - 18.110: 99.4824% ( 2) 00:17:21.043 18.110 - 18.204: 99.4897% ( 1) 00:17:21.043 18.299 - 18.394: 99.4970% ( 1) 00:17:21.043 18.394 - 18.489: 99.5043% ( 1) 00:17:21.043 21.523 - 21.618: 99.5116% ( 1) 00:17:21.043 23.324 - 23.419: 99.5188% ( 1) 00:17:21.043 3568.071 - 3592.344: 99.5261% ( 1) 00:17:21.043 3980.705 - 4004.978: 99.8979% ( 51) 00:17:21.043 4004.978 - 4029.250: 100.0000% ( 14) 00:17:21.043 00:17:21.043 23:30:41 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:21.043 23:30:41 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:21.043 23:30:41 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:21.043 23:30:41 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:21.043 23:30:41 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:21.300 [2024-07-11 23:30:42.074736] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:21.300 [ 00:17:21.300 { 00:17:21.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:21.300 "subtype": "Discovery", 00:17:21.300 "listen_addresses": [], 00:17:21.300 "allow_any_host": true, 00:17:21.300 "hosts": [] 00:17:21.300 }, 00:17:21.300 { 00:17:21.300 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:21.300 "subtype": "NVMe", 00:17:21.300 "listen_addresses": [ 00:17:21.300 { 00:17:21.300 "transport": "VFIOUSER", 00:17:21.300 "trtype": "VFIOUSER", 00:17:21.300 "adrfam": "IPv4", 00:17:21.300 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:21.300 
"trsvcid": "0" 00:17:21.300 } 00:17:21.300 ], 00:17:21.300 "allow_any_host": true, 00:17:21.300 "hosts": [], 00:17:21.301 "serial_number": "SPDK1", 00:17:21.301 "model_number": "SPDK bdev Controller", 00:17:21.301 "max_namespaces": 32, 00:17:21.301 "min_cntlid": 1, 00:17:21.301 "max_cntlid": 65519, 00:17:21.301 "namespaces": [ 00:17:21.301 { 00:17:21.301 "nsid": 1, 00:17:21.301 "bdev_name": "Malloc1", 00:17:21.301 "name": "Malloc1", 00:17:21.301 "nguid": "A4C9D06C14E94646A23AC73A9B53A64D", 00:17:21.301 "uuid": "a4c9d06c-14e9-4646-a23a-c73a9b53a64d" 00:17:21.301 } 00:17:21.301 ] 00:17:21.301 }, 00:17:21.301 { 00:17:21.301 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:21.301 "subtype": "NVMe", 00:17:21.301 "listen_addresses": [ 00:17:21.301 { 00:17:21.301 "transport": "VFIOUSER", 00:17:21.301 "trtype": "VFIOUSER", 00:17:21.301 "adrfam": "IPv4", 00:17:21.301 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:21.301 "trsvcid": "0" 00:17:21.301 } 00:17:21.301 ], 00:17:21.301 "allow_any_host": true, 00:17:21.301 "hosts": [], 00:17:21.301 "serial_number": "SPDK2", 00:17:21.301 "model_number": "SPDK bdev Controller", 00:17:21.301 "max_namespaces": 32, 00:17:21.301 "min_cntlid": 1, 00:17:21.301 "max_cntlid": 65519, 00:17:21.301 "namespaces": [ 00:17:21.301 { 00:17:21.301 "nsid": 1, 00:17:21.301 "bdev_name": "Malloc2", 00:17:21.301 "name": "Malloc2", 00:17:21.301 "nguid": "442A6CD576404F9D8C1A742DBA0F9386", 00:17:21.301 "uuid": "442a6cd5-7640-4f9d-8c1a-742dba0f9386" 00:17:21.301 } 00:17:21.301 ] 00:17:21.301 } 00:17:21.301 ] 00:17:21.301 23:30:42 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:21.301 23:30:42 -- target/nvmf_vfio_user.sh@34 -- # aerpid=231274 00:17:21.301 23:30:42 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:21.301 23:30:42 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:21.301 23:30:42 -- common/autotest_common.sh@1244 -- # local i=0 00:17:21.301 23:30:42 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.301 23:30:42 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.301 23:30:42 -- common/autotest_common.sh@1255 -- # return 0 00:17:21.301 23:30:42 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:21.301 23:30:42 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:21.301 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.867 Malloc3 00:17:21.867 23:30:42 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:22.431 23:30:43 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:22.431 Asynchronous Event Request test 00:17:22.431 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:22.431 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:22.431 Registering asynchronous event callbacks... 00:17:22.431 Starting namespace attribute notice tests for all controllers... 00:17:22.431 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:22.431 aer_cb - Changed Namespace 00:17:22.431 Cleaning up... 
00:17:22.688 [ 00:17:22.688 { 00:17:22.688 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:22.688 "subtype": "Discovery", 00:17:22.688 "listen_addresses": [], 00:17:22.688 "allow_any_host": true, 00:17:22.688 "hosts": [] 00:17:22.688 }, 00:17:22.688 { 00:17:22.688 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:22.688 "subtype": "NVMe", 00:17:22.688 "listen_addresses": [ 00:17:22.688 { 00:17:22.688 "transport": "VFIOUSER", 00:17:22.688 "trtype": "VFIOUSER", 00:17:22.688 "adrfam": "IPv4", 00:17:22.688 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:22.688 "trsvcid": "0" 00:17:22.688 } 00:17:22.688 ], 00:17:22.688 "allow_any_host": true, 00:17:22.688 "hosts": [], 00:17:22.688 "serial_number": "SPDK1", 00:17:22.688 "model_number": "SPDK bdev Controller", 00:17:22.688 "max_namespaces": 32, 00:17:22.688 "min_cntlid": 1, 00:17:22.688 "max_cntlid": 65519, 00:17:22.688 "namespaces": [ 00:17:22.688 { 00:17:22.688 "nsid": 1, 00:17:22.688 "bdev_name": "Malloc1", 00:17:22.688 "name": "Malloc1", 00:17:22.688 "nguid": "A4C9D06C14E94646A23AC73A9B53A64D", 00:17:22.688 "uuid": "a4c9d06c-14e9-4646-a23a-c73a9b53a64d" 00:17:22.688 }, 00:17:22.688 { 00:17:22.688 "nsid": 2, 00:17:22.688 "bdev_name": "Malloc3", 00:17:22.688 "name": "Malloc3", 00:17:22.688 "nguid": "7A80122512CB44C18513A71D9BCC0021", 00:17:22.688 "uuid": "7a801225-12cb-44c1-8513-a71d9bcc0021" 00:17:22.688 } 00:17:22.688 ] 00:17:22.688 }, 00:17:22.688 { 00:17:22.688 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:22.688 "subtype": "NVMe", 00:17:22.688 "listen_addresses": [ 00:17:22.688 { 00:17:22.688 "transport": "VFIOUSER", 00:17:22.688 "trtype": "VFIOUSER", 00:17:22.688 "adrfam": "IPv4", 00:17:22.688 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:22.688 "trsvcid": "0" 00:17:22.688 } 00:17:22.688 ], 00:17:22.688 "allow_any_host": true, 00:17:22.688 "hosts": [], 00:17:22.688 "serial_number": "SPDK2", 00:17:22.688 "model_number": "SPDK bdev Controller", 00:17:22.688 "max_namespaces": 32, 00:17:22.688 "min_cntlid": 1, 00:17:22.688 "max_cntlid": 65519, 00:17:22.688 "namespaces": [ 00:17:22.688 { 00:17:22.688 "nsid": 1, 00:17:22.688 "bdev_name": "Malloc2", 00:17:22.688 "name": "Malloc2", 00:17:22.688 "nguid": "442A6CD576404F9D8C1A742DBA0F9386", 00:17:22.688 "uuid": "442a6cd5-7640-4f9d-8c1a-742dba0f9386" 00:17:22.688 } 00:17:22.688 ] 00:17:22.688 } 00:17:22.688 ] 00:17:22.688 23:30:43 -- target/nvmf_vfio_user.sh@44 -- # wait 231274 00:17:22.688 23:30:43 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:22.688 23:30:43 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:22.688 23:30:43 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:22.688 23:30:43 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:22.688 [2024-07-11 23:30:43.609398] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
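The updated listing above shows Malloc3 attached to cnode1 as nsid 2, and the script then runs spdk_nvme_identify against the second subsystem. For the VFIOUSER transport the address is the vfio-user socket directory rather than a PCI BDF, which is why the debug lines that follow map BARs from /var/run/vfio-user/domain/vfio-user2/2. A hedged sketch of building that transport ID with SPDK's connect API (defaults accepted via NULL opts):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_vfio_user(void)
    {
        struct spdk_nvme_transport_id trid = {0};

        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_VFIOUSER);
        snprintf(trid.traddr, sizeof(trid.traddr),
                 "/var/run/vfio-user/domain/vfio-user2/2");
        snprintf(trid.subnqn, sizeof(trid.subnqn),
                 "nqn.2019-07.io.spdk:cnode2");
        /* NULL opts selects the driver defaults. */
        return spdk_nvme_connect(&trid, NULL, 0);
    }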
00:17:22.688 [2024-07-11 23:30:43.609505] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231426 ] 00:17:22.688 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.947 [2024-07-11 23:30:43.663344] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:22.947 [2024-07-11 23:30:43.671443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:22.947 [2024-07-11 23:30:43.671473] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd75fc31000 00:17:22.947 [2024-07-11 23:30:43.672430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.673437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.674439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.675459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.676466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.677475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.678485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.679490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:22.947 [2024-07-11 23:30:43.680503] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:22.947 [2024-07-11 23:30:43.680525] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd75e9e5000 00:17:22.947 [2024-07-11 23:30:43.681642] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:22.947 [2024-07-11 23:30:43.697802] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:22.947 [2024-07-11 23:30:43.697837] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:22.947 [2024-07-11 23:30:43.702946] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:22.947 [2024-07-11 23:30:43.703003] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:22.947 [2024-07-11 23:30:43.703098] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:17:22.947 [2024-07-11 23:30:43.703148] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:22.947 [2024-07-11 23:30:43.703162] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:22.947 [2024-07-11 23:30:43.703950] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:22.947 [2024-07-11 23:30:43.703970] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:22.947 [2024-07-11 23:30:43.703982] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:22.947 [2024-07-11 23:30:43.704953] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:22.947 [2024-07-11 23:30:43.704972] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:22.947 [2024-07-11 23:30:43.704987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:22.947 [2024-07-11 23:30:43.705966] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:22.947 [2024-07-11 23:30:43.705987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:22.947 [2024-07-11 23:30:43.706972] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:22.947 [2024-07-11 23:30:43.706992] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:22.947 [2024-07-11 23:30:43.707002] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:22.947 [2024-07-11 23:30:43.707013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:22.947 [2024-07-11 23:30:43.707136] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:22.947 [2024-07-11 23:30:43.707153] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:22.947 [2024-07-11 23:30:43.707162] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:22.947 [2024-07-11 23:30:43.707976] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:22.947 [2024-07-11 23:30:43.708993] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:22.947 [2024-07-11 23:30:43.709998] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:22.947 [2024-07-11 23:30:43.711021] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:22.947 [2024-07-11 23:30:43.712003] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:22.947 [2024-07-11 23:30:43.712023] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:22.947 [2024-07-11 23:30:43.712039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:22.947 [2024-07-11 23:30:43.712063] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:22.947 [2024-07-11 23:30:43.712080] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:22.947 [2024-07-11 23:30:43.712101] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:22.947 [2024-07-11 23:30:43.712110] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:22.947 [2024-07-11 23:30:43.712152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.716159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.716185] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:22.948 [2024-07-11 23:30:43.716195] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:22.948 [2024-07-11 23:30:43.716203] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:22.948 [2024-07-11 23:30:43.716211] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:22.948 [2024-07-11 23:30:43.716219] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:22.948 [2024-07-11 23:30:43.716227] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:22.948 [2024-07-11 23:30:43.716236] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.716254] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.716272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.724151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 
23:30:43.724181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.948 [2024-07-11 23:30:43.724196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.948 [2024-07-11 23:30:43.724208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.948 [2024-07-11 23:30:43.724220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.948 [2024-07-11 23:30:43.724229] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.724245] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.724259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.732153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.732173] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:22.948 [2024-07-11 23:30:43.732187] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.732199] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.732213] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.732229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.740151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.740223] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.740238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.740252] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:22.948 [2024-07-11 23:30:43.740261] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:22.948 [2024-07-11 23:30:43.740271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.748150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 
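The SET FEATURES NUMBER OF QUEUES completion above carries cdw0:7e007e, which is how the target reports the number of I/O queues it actually granted. As a standalone illustration (not SPDK source; it only assumes the NVMe-spec layout of this completion dword, with NSQA in bits 15:0 and NCQA in bits 31:16, both zero-based counts), the decode is:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode Dword 0 of a Set Features - Number of Queues completion. */
    int main(void)
    {
        uint32_t cdw0 = 0x007e007e;              /* value from the log above */
        unsigned nsqa = cdw0 & 0xffffu;          /* 0x7e = 126, zero-based */
        unsigned ncqa = (cdw0 >> 16) & 0xffffu;  /* 0x7e = 126, zero-based */
        printf("I/O submission queues granted: %u\n", nsqa + 1); /* 127 */
        printf("I/O completion queues granted: %u\n", ncqa + 1); /* 127 */
        return 0;
    }

The zero-based 0x7e in each half therefore means 127 usable I/O queue pairs, consistent with the "Max Number of I/O Queues: 127" line in the controller dump further down.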
00:17:22.948 [2024-07-11 23:30:43.748180] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:22.948 [2024-07-11 23:30:43.748197] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.748212] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.748225] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:22.948 [2024-07-11 23:30:43.748234] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:22.948 [2024-07-11 23:30:43.748244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.756149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.756179] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.756196] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.756209] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:22.948 [2024-07-11 23:30:43.756218] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:22.948 [2024-07-11 23:30:43.756228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.764152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.764173] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.764191] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.764207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.764219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.764228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:22.948 [2024-07-11 23:30:43.764237] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:22.948 [2024-07-11 23:30:43.764245] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:22.948 [2024-07-11 
23:30:43.764254] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:22.948 [2024-07-11 23:30:43.764280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.772149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.772176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.780164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.780189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.788151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.788176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.796155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.796181] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:22.948 [2024-07-11 23:30:43.796191] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:22.948 [2024-07-11 23:30:43.796197] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:22.948 [2024-07-11 23:30:43.796204] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:22.948 [2024-07-11 23:30:43.796214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:22.948 [2024-07-11 23:30:43.796225] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:22.948 [2024-07-11 23:30:43.796233] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:22.948 [2024-07-11 23:30:43.796242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.796253] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:22.948 [2024-07-11 23:30:43.796261] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:22.948 [2024-07-11 23:30:43.796270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.796282] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:22.948 [2024-07-11 23:30:43.796294] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:22.948 [2024-07-11 23:30:43.796304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:22.948 [2024-07-11 23:30:43.804149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.804190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.804207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:22.948 [2024-07-11 23:30:43.804219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:17:22.948 ===================================================== 00:17:22.948 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:22.948 ===================================================== 00:17:22.948 Controller Capabilities/Features 00:17:22.948 ================================ 00:17:22.948 Vendor ID: 4e58 00:17:22.948 Subsystem Vendor ID: 4e58 00:17:22.948 Serial Number: SPDK2 00:17:22.948 Model Number: SPDK bdev Controller 00:17:22.948 Firmware Version: 24.01.1 00:17:22.948 Recommended Arb Burst: 6 00:17:22.948 IEEE OUI Identifier: 8d 6b 50 00:17:22.948 Multi-path I/O 00:17:22.948 May have multiple subsystem ports: Yes 00:17:22.948 May have multiple controllers: Yes 00:17:22.948 Associated with SR-IOV VF: No 00:17:22.948 Max Data Transfer Size: 131072 00:17:22.948 Max Number of Namespaces: 32 00:17:22.948 Max Number of I/O Queues: 127 00:17:22.948 NVMe Specification Version (VS): 1.3 00:17:22.948 NVMe Specification Version (Identify): 1.3 00:17:22.949 Maximum Queue Entries: 256 00:17:22.949 Contiguous Queues Required: Yes 00:17:22.949 Arbitration Mechanisms Supported 00:17:22.949 Weighted Round Robin: Not Supported 00:17:22.949 Vendor Specific: Not Supported 00:17:22.949 Reset Timeout: 15000 ms 00:17:22.949 Doorbell Stride: 4 bytes 00:17:22.949 NVM Subsystem Reset: Not Supported 00:17:22.949 Command Sets Supported 00:17:22.949 NVM Command Set: Supported 00:17:22.949 Boot Partition: Not Supported 00:17:22.949 Memory Page Size Minimum: 4096 bytes 00:17:22.949 Memory Page Size Maximum: 4096 bytes 00:17:22.949 Persistent Memory Region: Not Supported 00:17:22.949 Optional Asynchronous Events Supported 00:17:22.949 Namespace Attribute Notices: Supported 00:17:22.949 Firmware Activation Notices: Not Supported 00:17:22.949 ANA Change Notices: Not Supported 00:17:22.949 PLE Aggregate Log Change Notices: Not Supported 00:17:22.949 LBA Status Info Alert Notices: Not Supported 00:17:22.949 EGE Aggregate Log Change Notices: Not Supported 00:17:22.949 Normal NVM Subsystem Shutdown event: Not Supported 00:17:22.949 Zone Descriptor Change Notices: Not Supported 00:17:22.949 Discovery Log Change Notices: Not Supported
00:17:22.949 Controller Attributes 00:17:22.949 128-bit Host Identifier: Supported 00:17:22.949 Non-Operational Permissive Mode: Not Supported 00:17:22.949 NVM Sets: Not Supported 00:17:22.949 Read Recovery Levels: Not Supported 00:17:22.949 Endurance Groups: Not Supported 00:17:22.949 Predictable Latency Mode: Not Supported 00:17:22.949 Traffic Based Keep Alive: Not Supported 00:17:22.949 Namespace Granularity: Not Supported 00:17:22.949 SQ Associations: Not Supported 00:17:22.949 UUID List: Not Supported 00:17:22.949 Multi-Domain Subsystem: Not Supported 00:17:22.949 Fixed Capacity Management: Not Supported 00:17:22.949 Variable Capacity Management: Not Supported 00:17:22.949 Delete Endurance Group: Not Supported 00:17:22.949 Delete NVM Set: Not Supported 00:17:22.949 Extended LBA Formats Supported: Not Supported 00:17:22.949 Flexible Data Placement Supported: Not Supported 00:17:22.949
00:17:22.949 Controller Memory Buffer Support 00:17:22.949 ================================ 00:17:22.949 Supported: No 00:17:22.949 00:17:22.949 Persistent Memory Region Support 00:17:22.949 ================================ 00:17:22.949 Supported: No 00:17:22.949 00:17:22.949 Admin Command Set Attributes 00:17:22.949 ============================ 00:17:22.949 Security Send/Receive: Not Supported 00:17:22.949 Format NVM: Not Supported 00:17:22.949 Firmware Activate/Download: Not Supported 00:17:22.949 Namespace Management: Not Supported 00:17:22.949 Device Self-Test: Not Supported 00:17:22.949 Directives: Not Supported 00:17:22.949 NVMe-MI: Not Supported 00:17:22.949 Virtualization Management: Not Supported 00:17:22.949 Doorbell Buffer Config: Not Supported 00:17:22.949 Get LBA Status Capability: Not Supported 00:17:22.949 Command & Feature Lockdown Capability: Not Supported 00:17:22.949 Abort Command Limit: 4 00:17:22.949 Async Event Request Limit: 4 00:17:22.949 Number of Firmware Slots: N/A 00:17:22.949 Firmware Slot 1 Read-Only: N/A 00:17:22.949 Firmware Activation Without Reset: N/A 00:17:22.949 Multiple Update Detection Support: N/A 00:17:22.949 Firmware Update Granularity: No Information Provided 00:17:22.949 Per-Namespace SMART Log: No 00:17:22.949 Asymmetric Namespace Access Log Page: Not Supported 00:17:22.949 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:22.949 Command Effects Log Page: Supported 00:17:22.949 Get Log Page Extended Data: Supported 00:17:22.949 Telemetry Log Pages: Not Supported 00:17:22.949 Persistent Event Log Pages: Not Supported 00:17:22.949 Supported Log Pages Log Page: May Support 00:17:22.949 Commands Supported & Effects Log Page: Not Supported 00:17:22.949 Feature Identifiers & Effects Log Page: May Support 00:17:22.949 NVMe-MI Commands & Effects Log Page: May Support 00:17:22.949 Data Area 4 for Telemetry Log: Not Supported 00:17:22.949 Error Log Page Entries Supported: 128 00:17:22.949 Keep Alive: Supported 00:17:22.949 Keep Alive Granularity: 10000 ms 00:17:22.949
00:17:22.949 NVM Command Set Attributes 00:17:22.949 ========================== 00:17:22.949 Submission Queue Entry Size 00:17:22.949 Max: 64 00:17:22.949 Min: 64 00:17:22.949 Completion Queue Entry Size 00:17:22.949 Max: 16 00:17:22.949 Min: 16 00:17:22.949 Number of Namespaces: 32 00:17:22.949 Compare Command: Supported 00:17:22.949 Write Uncorrectable Command: Not Supported 00:17:22.949 Dataset Management Command: Supported 00:17:22.949 Write Zeroes Command: Supported 00:17:22.949 Set Features Save Field: Not Supported 00:17:22.949 Reservations: Not Supported 00:17:22.949 Timestamp: Not Supported 00:17:22.949 Copy: Supported 00:17:22.949 Volatile Write Cache: Present 00:17:22.949 Atomic Write Unit (Normal): 1 00:17:22.949 Atomic Write Unit (PFail): 1 00:17:22.949 Atomic Compare & Write Unit: 1 00:17:22.949 Fused Compare & Write: Supported 00:17:22.949 Scatter-Gather List 00:17:22.949 SGL Command Set: Supported (Dword aligned) 00:17:22.949 SGL Keyed: Not Supported 00:17:22.949 SGL Bit Bucket Descriptor: Not Supported 00:17:22.949 SGL Metadata Pointer: Not Supported 00:17:22.949 Oversized SGL: Not Supported 00:17:22.949 SGL Metadata Address: Not Supported 00:17:22.949 SGL Offset: Not Supported 00:17:22.949 Transport SGL Data Block: Not Supported 00:17:22.949 Replay Protected Memory Block: Not Supported 00:17:22.949
00:17:22.949 Firmware Slot Information 00:17:22.949 ========================= 00:17:22.949 Active slot: 1 00:17:22.949 Slot 1 Firmware Revision: 24.01.1 00:17:22.949 00:17:22.949 00:17:22.949 Commands Supported and Effects 00:17:22.949 ============================== 00:17:22.949 Admin Commands 00:17:22.949 -------------- 00:17:22.949 Get Log Page (02h): Supported 00:17:22.949 Identify (06h): Supported 00:17:22.949 Abort (08h): Supported 00:17:22.949 Set Features (09h): Supported 00:17:22.949 Get Features (0Ah): Supported 00:17:22.949 Asynchronous Event Request (0Ch): Supported 00:17:22.949 Keep Alive (18h): Supported 00:17:22.949 I/O Commands 00:17:22.949 ------------ 00:17:22.949 Flush (00h): Supported LBA-Change 00:17:22.949 Write (01h): Supported LBA-Change 00:17:22.949 Read (02h): Supported 00:17:22.949 Compare (05h): Supported 00:17:22.949 Write Zeroes (08h): Supported LBA-Change 00:17:22.949 Dataset Management (09h): Supported LBA-Change 00:17:22.949 Copy (19h): Supported LBA-Change 00:17:22.949 Unknown (79h): Supported LBA-Change 00:17:22.949 Unknown (7Ah): Supported 00:17:22.949 00:17:22.949 Error Log 00:17:22.949 ========= 00:17:22.949 00:17:22.949 Arbitration 00:17:22.949 =========== 00:17:22.949 Arbitration Burst: 1 00:17:22.949 00:17:22.949 Power Management 00:17:22.949 ================ 00:17:22.949 Number of Power States: 1 00:17:22.949 Current Power State: Power State #0 00:17:22.949 Power State #0: 00:17:22.949 Max Power: 0.00 W 00:17:22.949 Non-Operational State: Operational 00:17:22.949 Entry Latency: Not Reported 00:17:22.949 Exit Latency: Not Reported 00:17:22.949 Relative Read Throughput: 0 00:17:22.949 Relative Read Latency: 0 00:17:22.949 Relative Write Throughput: 0 00:17:22.949 Relative Write Latency: 0 00:17:22.949 Idle Power: Not Reported 00:17:22.949 Active Power: Not Reported 00:17:22.949 Non-Operational Permissive Mode: Not Supported 00:17:22.949
00:17:22.949 Health Information 00:17:22.949 ================== 00:17:22.949 Critical Warnings: 00:17:22.949 Available Spare Space: OK 00:17:22.949 Temperature: OK 00:17:22.949 Device Reliability: OK 00:17:22.949 Read Only: No 00:17:22.949 Volatile Memory Backup: OK 00:17:22.949 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:22.950 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:22.950 Available Spare: 0% 00:17:22.950 Available Spare Threshold: 0% 00:17:22.950 Life Percentage Used: 0% 00:17:22.950 Data Units Read: 0 00:17:22.950 Data Units Written: 0 00:17:22.950 Host Read Commands: 0 00:17:22.950 Host Write Commands: 0 00:17:22.950 Controller Busy Time: 0 minutes 00:17:22.950 Power Cycles: 0 00:17:22.950 Power On Hours: 0 hours 00:17:22.950 Unsafe Shutdowns: 0 00:17:22.950 Unrecoverable Media Errors: 0 00:17:22.950 Lifetime Error Log Entries: 0 00:17:22.950 Warning Temperature Time: 0 minutes 00:17:22.950 Critical Temperature Time: 0 minutes 00:17:22.950
00:17:22.950 Number of Queues 00:17:22.950 ================ 00:17:22.950 Number of I/O Submission Queues: 127 00:17:22.950 Number of I/O Completion Queues: 127 00:17:22.950 00:17:22.950 Active Namespaces 00:17:22.950 ================= 00:17:22.950 Namespace ID:1 00:17:22.950 Error Recovery Timeout: Unlimited 00:17:22.950 Command Set Identifier: NVM (00h) 00:17:22.950 Deallocate: Supported 00:17:22.950 Deallocated/Unwritten Error: Not Supported 00:17:22.950 Deallocated Read Value: Unknown 00:17:22.950 Deallocate in Write Zeroes: Not Supported 00:17:22.950 Deallocated Guard Field: 0xFFFF 00:17:22.950 Flush: Supported 00:17:22.950 Reservation: Supported 00:17:22.950 Namespace Sharing Capabilities: Multiple Controllers 00:17:22.950 Size (in LBAs): 131072 (0GiB) 00:17:22.950 Capacity (in LBAs): 131072 (0GiB) 00:17:22.950 Utilization (in LBAs): 131072 (0GiB) 00:17:22.950 NGUID: 442A6CD576404F9D8C1A742DBA0F9386 00:17:22.950 UUID: 442a6cd5-7640-4f9d-8c1a-742dba0f9386 00:17:22.950 Thin Provisioning: Not Supported 00:17:22.950 Per-NS Atomic Units: Yes 00:17:22.950 Atomic Boundary Size (Normal): 0 00:17:22.950 Atomic Boundary Size (PFail): 0 00:17:22.950 Atomic Boundary Offset: 0 00:17:22.950 Maximum Single Source Range Length: 65535 00:17:22.950 Maximum Copy Length: 65535 00:17:22.950 Maximum Source Range Count: 1 00:17:22.950 NGUID/EUI64 Never Reused: No 00:17:22.950 Namespace Write Protected: No 00:17:22.950 Number of LBA Formats: 1 00:17:22.950 Current LBA Format: LBA Format #00 00:17:22.950 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:22.949 [2024-07-11 23:30:43.804348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:22.949 [2024-07-11 23:30:43.812148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:22.949 [2024-07-11 23:30:43.812195] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:22.949 [2024-07-11 23:30:43.812214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.949 [2024-07-11 23:30:43.812225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.949 [2024-07-11 23:30:43.812234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.949 [2024-07-11 23:30:43.812244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.949 [2024-07-11 23:30:43.812306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:22.949 [2024-07-11 23:30:43.812327] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:22.949 [2024-07-11 23:30:43.813347] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:22.949 [2024-07-11 23:30:43.813363] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:22.949 [2024-07-11 23:30:43.814315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:22.949 [2024-07-11 23:30:43.814340] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:22.949 [2024-07-11 23:30:43.814394] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:22.949 [2024-07-11 23:30:43.817151] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
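The teardown entries above show the normal-shutdown handshake at register level: CC (offset 0x14) reads back as 0x460001, is rewritten as 0x464001, and CSTS (offset 0x1c) then reads 0x9. A minimal standalone decode of those values follows (field positions per the NVMe CC/CSTS register layouts; illustrative only, not SPDK's implementation):

    #include <stdint.h>
    #include <stdio.h>

    #define CC_EN_MASK      0x1u               /* CC.EN, bit 0 */
    #define CC_SHN_SHIFT    14                 /* CC.SHN, bits 15:14 */
    #define CC_SHN_MASK     (0x3u << CC_SHN_SHIFT)
    #define CSTS_SHST_SHIFT 2                  /* CSTS.SHST, bits 3:2 */
    #define CSTS_SHST_MASK  (0x3u << CSTS_SHST_SHIFT)

    int main(void)
    {
        uint32_t cc_new = 0x464001;  /* 0x460001 with SHN set to 01b */
        uint32_t csts   = 0x9;       /* value read back after the write */

        printf("CC.EN  = %u\n", cc_new & CC_EN_MASK);            /* 1 */
        printf("CC.SHN = %u (01b = normal shutdown)\n",
               (cc_new & CC_SHN_MASK) >> CC_SHN_SHIFT);          /* 1 */
        printf("CSTS.SHST = %u (10b = shutdown complete)\n",
               (csts & CSTS_SHST_MASK) >> CSTS_SHST_SHIFT);      /* 2 */
        return 0;
    }

SHST already reading 2 (shutdown processing complete) on the first poll is why nvme_ctrlr_shutdown_poll_async can report "shutdown complete in 0 milliseconds" immediately afterwards.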
00:17:22.950 00:17:22.950 23:30:43 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:23.207 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.470 Initializing NVMe Controllers 00:17:28.470 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:28.470 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:28.470 Initialization complete. Launching workers. 00:17:28.470 ======================================================== 00:17:28.470 Latency(us) 00:17:28.470 Device Information : IOPS MiB/s Average min max 00:17:28.470 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37857.37 147.88 3380.28 1133.94 6680.93 00:17:28.470 ======================================================== 00:17:28.470 Total : 37857.37 147.88 3380.28 1133.94 6680.93 00:17:28.470 00:17:28.470 23:30:49 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:28.470 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.730 Initializing NVMe Controllers 00:17:33.730 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:33.730 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:33.730 Initialization complete. Launching workers. 00:17:33.730 ======================================================== 00:17:33.730 Latency(us) 00:17:33.730 Device Information : IOPS MiB/s Average min max 00:17:33.730 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36322.29 141.88 3523.42 1147.05 7646.50 00:17:33.730 ======================================================== 00:17:33.730 Total : 36322.29 141.88 3523.42 1147.05 7646.50 00:17:33.730 00:17:33.730 23:30:54 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:33.730 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.992 Initializing NVMe Controllers 00:17:38.992 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:38.992 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:38.992 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:38.992 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:38.992 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:38.992 Initialization complete. Launching workers. 
00:17:38.992 Starting thread on core 2 00:17:38.992 Starting thread on core 3 00:17:38.992 Starting thread on core 1 00:17:38.992 23:30:59 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:39.268 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.600 Initializing NVMe Controllers 00:17:42.600 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:42.600 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:42.600 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:42.600 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:42.600 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:42.600 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:42.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:42.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:42.601 Initialization complete. Launching workers. 00:17:42.601 Starting thread on core 1 with urgent priority queue 00:17:42.601 Starting thread on core 2 with urgent priority queue 00:17:42.601 Starting thread on core 3 with urgent priority queue 00:17:42.601 Starting thread on core 0 with urgent priority queue 00:17:42.601 SPDK bdev Controller (SPDK2 ) core 0: 518.33 IO/s 192.93 secs/100000 ios 00:17:42.601 SPDK bdev Controller (SPDK2 ) core 1: 486.00 IO/s 205.76 secs/100000 ios 00:17:42.601 SPDK bdev Controller (SPDK2 ) core 2: 511.33 IO/s 195.57 secs/100000 ios 00:17:42.601 SPDK bdev Controller (SPDK2 ) core 3: 397.00 IO/s 251.89 secs/100000 ios 00:17:42.601 ======================================================== 00:17:42.601 00:17:42.601 23:31:03 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:42.857 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.114 Initializing NVMe Controllers 00:17:43.114 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:43.114 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:43.114 Namespace ID: 1 size: 0GB 00:17:43.114 Initialization complete. 00:17:43.114 INFO: using host memory buffer for IO 00:17:43.114 Hello world! 00:17:43.114 23:31:03 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:43.114 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.486 Initializing NVMe Controllers 00:17:44.486 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:44.486 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:44.486 Initialization complete. Launching workers. 
00:17:44.486 submit (in ns) avg, min, max = 6815.2, 3507.8, 4016963.3 00:17:44.486 complete (in ns) avg, min, max = 25585.2, 2028.9, 4017360.0 00:17:44.486 00:17:44.486 Submit histogram 00:17:44.486 ================ 00:17:44.486 Range in us Cumulative Count 00:17:44.486 3.484 - 3.508: 0.0073% ( 1) 00:17:44.486 3.508 - 3.532: 0.6802% ( 92) 00:17:44.486 3.532 - 3.556: 3.5986% ( 399) 00:17:44.486 3.556 - 3.579: 11.1249% ( 1029) 00:17:44.486 3.579 - 3.603: 25.0293% ( 1901) 00:17:44.486 3.603 - 3.627: 36.5199% ( 1571) 00:17:44.486 3.627 - 3.650: 47.5132% ( 1503) 00:17:44.486 3.650 - 3.674: 54.0960% ( 900) 00:17:44.486 3.674 - 3.698: 60.7226% ( 906) 00:17:44.486 3.698 - 3.721: 65.1989% ( 612) 00:17:44.486 3.721 - 3.745: 68.7683% ( 488) 00:17:44.486 3.745 - 3.769: 71.1893% ( 331) 00:17:44.486 3.769 - 3.793: 73.2226% ( 278) 00:17:44.486 3.793 - 3.816: 76.2142% ( 409) 00:17:44.486 3.816 - 3.840: 80.5295% ( 590) 00:17:44.486 3.840 - 3.864: 84.7499% ( 577) 00:17:44.486 3.864 - 3.887: 87.1928% ( 334) 00:17:44.486 3.887 - 3.911: 89.0579% ( 255) 00:17:44.486 3.911 - 3.935: 91.0547% ( 273) 00:17:44.486 3.935 - 3.959: 92.4737% ( 194) 00:17:44.486 3.959 - 3.982: 93.2929% ( 112) 00:17:44.486 3.982 - 4.006: 93.9658% ( 92) 00:17:44.486 4.006 - 4.030: 94.4851% ( 71) 00:17:44.486 4.030 - 4.053: 94.9678% ( 66) 00:17:44.486 4.053 - 4.077: 95.5603% ( 81) 00:17:44.486 4.077 - 4.101: 95.9699% ( 56) 00:17:44.486 4.101 - 4.124: 96.3721% ( 55) 00:17:44.486 4.124 - 4.148: 96.5843% ( 29) 00:17:44.486 4.148 - 4.172: 96.7159% ( 18) 00:17:44.486 4.172 - 4.196: 96.8037% ( 12) 00:17:44.486 4.196 - 4.219: 96.8841% ( 11) 00:17:44.486 4.219 - 4.243: 96.9865% ( 14) 00:17:44.486 4.243 - 4.267: 97.0377% ( 7) 00:17:44.486 4.267 - 4.290: 97.1475% ( 15) 00:17:44.486 4.290 - 4.314: 97.1987% ( 7) 00:17:44.486 4.314 - 4.338: 97.2425% ( 6) 00:17:44.486 4.338 - 4.361: 97.2791% ( 5) 00:17:44.486 4.361 - 4.385: 97.2864% ( 1) 00:17:44.486 4.385 - 4.409: 97.3230% ( 5) 00:17:44.486 4.409 - 4.433: 97.3376% ( 2) 00:17:44.486 4.433 - 4.456: 97.3596% ( 3) 00:17:44.486 4.456 - 4.480: 97.3669% ( 1) 00:17:44.486 4.480 - 4.504: 97.3815% ( 2) 00:17:44.486 4.527 - 4.551: 97.3961% ( 2) 00:17:44.486 4.551 - 4.575: 97.4035% ( 1) 00:17:44.486 4.575 - 4.599: 97.4108% ( 1) 00:17:44.486 4.693 - 4.717: 97.4181% ( 1) 00:17:44.486 4.717 - 4.741: 97.4327% ( 2) 00:17:44.486 4.764 - 4.788: 97.4547% ( 3) 00:17:44.486 4.788 - 4.812: 97.4912% ( 5) 00:17:44.486 4.812 - 4.836: 97.5497% ( 8) 00:17:44.486 4.836 - 4.859: 97.6083% ( 8) 00:17:44.486 4.859 - 4.883: 97.6668% ( 8) 00:17:44.486 4.883 - 4.907: 97.7106% ( 6) 00:17:44.486 4.907 - 4.930: 97.7838% ( 10) 00:17:44.486 4.930 - 4.954: 97.8716% ( 12) 00:17:44.486 4.954 - 4.978: 97.9154% ( 6) 00:17:44.486 4.978 - 5.001: 97.9520% ( 5) 00:17:44.486 5.001 - 5.025: 97.9959% ( 6) 00:17:44.486 5.025 - 5.049: 98.0252% ( 4) 00:17:44.486 5.049 - 5.073: 98.0617% ( 5) 00:17:44.486 5.073 - 5.096: 98.0983% ( 5) 00:17:44.486 5.096 - 5.120: 98.1202% ( 3) 00:17:44.486 5.120 - 5.144: 98.1276% ( 1) 00:17:44.486 5.144 - 5.167: 98.1495% ( 3) 00:17:44.486 5.167 - 5.191: 98.1788% ( 4) 00:17:44.486 5.191 - 5.215: 98.1861% ( 1) 00:17:44.486 5.215 - 5.239: 98.1934% ( 1) 00:17:44.486 5.262 - 5.286: 98.2153% ( 3) 00:17:44.486 5.286 - 5.310: 98.2226% ( 1) 00:17:44.486 5.310 - 5.333: 98.2373% ( 2) 00:17:44.486 5.333 - 5.357: 98.2446% ( 1) 00:17:44.486 5.404 - 5.428: 98.2592% ( 2) 00:17:44.486 5.452 - 5.476: 98.2665% ( 1) 00:17:44.486 5.570 - 5.594: 98.2738% ( 1) 00:17:44.486 5.594 - 5.618: 98.2812% ( 1) 00:17:44.486 5.618 - 5.641: 98.2885% ( 1) 
00:17:44.486 5.760 - 5.784: 98.2958% ( 1) 00:17:44.486 5.807 - 5.831: 98.3031% ( 1) 00:17:44.486 5.831 - 5.855: 98.3177% ( 2) 00:17:44.486 5.855 - 5.879: 98.3250% ( 1) 00:17:44.486 5.902 - 5.926: 98.3397% ( 2) 00:17:44.486 5.926 - 5.950: 98.3470% ( 1) 00:17:44.486 6.068 - 6.116: 98.3616% ( 2) 00:17:44.486 6.116 - 6.163: 98.3689% ( 1) 00:17:44.486 6.210 - 6.258: 98.3762% ( 1) 00:17:44.486 6.305 - 6.353: 98.3909% ( 2) 00:17:44.486 6.353 - 6.400: 98.3982% ( 1) 00:17:44.486 6.495 - 6.542: 98.4055% ( 1) 00:17:44.486 6.590 - 6.637: 98.4128% ( 1) 00:17:44.486 6.637 - 6.684: 98.4201% ( 1) 00:17:44.486 6.969 - 7.016: 98.4421% ( 3) 00:17:44.486 7.301 - 7.348: 98.4494% ( 1) 00:17:44.486 7.348 - 7.396: 98.4567% ( 1) 00:17:44.486 7.490 - 7.538: 98.4640% ( 1) 00:17:44.486 7.870 - 7.917: 98.4713% ( 1) 00:17:44.486 7.964 - 8.012: 98.4786% ( 1) 00:17:44.486 8.059 - 8.107: 98.4860% ( 1) 00:17:44.486 8.154 - 8.201: 98.5006% ( 2) 00:17:44.486 8.296 - 8.344: 98.5152% ( 2) 00:17:44.486 8.391 - 8.439: 98.5225% ( 1) 00:17:44.486 8.439 - 8.486: 98.5372% ( 2) 00:17:44.486 8.486 - 8.533: 98.5445% ( 1) 00:17:44.486 8.628 - 8.676: 98.5518% ( 1) 00:17:44.486 8.676 - 8.723: 98.5664% ( 2) 00:17:44.486 8.818 - 8.865: 98.5737% ( 1) 00:17:44.486 8.865 - 8.913: 98.5810% ( 1) 00:17:44.486 8.960 - 9.007: 98.5884% ( 1) 00:17:44.486 9.007 - 9.055: 98.5957% ( 1) 00:17:44.486 9.055 - 9.102: 98.6103% ( 2) 00:17:44.486 9.102 - 9.150: 98.6322% ( 3) 00:17:44.486 9.150 - 9.197: 98.6396% ( 1) 00:17:44.486 9.197 - 9.244: 98.6469% ( 1) 00:17:44.486 9.244 - 9.292: 98.6615% ( 2) 00:17:44.486 9.292 - 9.339: 98.6688% ( 1) 00:17:44.486 9.339 - 9.387: 98.6761% ( 1) 00:17:44.486 9.434 - 9.481: 98.6908% ( 2) 00:17:44.486 9.481 - 9.529: 98.6981% ( 1) 00:17:44.486 9.529 - 9.576: 98.7054% ( 1) 00:17:44.486 9.576 - 9.624: 98.7127% ( 1) 00:17:44.486 9.719 - 9.766: 98.7200% ( 1) 00:17:44.486 9.766 - 9.813: 98.7346% ( 2) 00:17:44.486 9.813 - 9.861: 98.7420% ( 1) 00:17:44.486 9.861 - 9.908: 98.7639% ( 3) 00:17:44.486 9.908 - 9.956: 98.7785% ( 2) 00:17:44.486 9.956 - 10.003: 98.7858% ( 1) 00:17:44.486 10.003 - 10.050: 98.8005% ( 2) 00:17:44.486 10.240 - 10.287: 98.8224% ( 3) 00:17:44.486 10.335 - 10.382: 98.8297% ( 1) 00:17:44.486 10.430 - 10.477: 98.8370% ( 1) 00:17:44.486 10.524 - 10.572: 98.8444% ( 1) 00:17:44.486 10.572 - 10.619: 98.8590% ( 2) 00:17:44.486 10.714 - 10.761: 98.8663% ( 1) 00:17:44.486 10.809 - 10.856: 98.8809% ( 2) 00:17:44.486 10.856 - 10.904: 98.8882% ( 1) 00:17:44.486 10.951 - 10.999: 98.8956% ( 1) 00:17:44.486 10.999 - 11.046: 98.9029% ( 1) 00:17:44.486 11.520 - 11.567: 98.9175% ( 2) 00:17:44.487 11.567 - 11.615: 98.9248% ( 1) 00:17:44.487 11.947 - 11.994: 98.9321% ( 1) 00:17:44.487 12.136 - 12.231: 98.9394% ( 1) 00:17:44.487 12.326 - 12.421: 98.9468% ( 1) 00:17:44.487 12.421 - 12.516: 98.9541% ( 1) 00:17:44.487 12.705 - 12.800: 98.9614% ( 1) 00:17:44.487 12.895 - 12.990: 98.9687% ( 1) 00:17:44.487 12.990 - 13.084: 98.9760% ( 1) 00:17:44.487 13.748 - 13.843: 98.9833% ( 1) 00:17:44.487 13.843 - 13.938: 98.9906% ( 1) 00:17:44.487 13.938 - 14.033: 98.9980% ( 1) 00:17:44.487 14.127 - 14.222: 99.0053% ( 1) 00:17:44.487 14.412 - 14.507: 99.0126% ( 1) 00:17:44.487 14.507 - 14.601: 99.0418% ( 4) 00:17:44.487 14.886 - 14.981: 99.0492% ( 1) 00:17:44.487 14.981 - 15.076: 99.0565% ( 1) 00:17:44.487 15.170 - 15.265: 99.0638% ( 1) 00:17:44.487 15.550 - 15.644: 99.0784% ( 2) 00:17:44.487 16.972 - 17.067: 99.0930% ( 2) 00:17:44.487 17.256 - 17.351: 99.1223% ( 4) 00:17:44.487 17.351 - 17.446: 99.1516% ( 4) 00:17:44.487 17.446 - 17.541: 99.1881% ( 5) 
00:17:44.487 17.541 - 17.636: 99.2539% ( 9) 00:17:44.487 17.636 - 17.730: 99.3198% ( 9) 00:17:44.487 17.730 - 17.825: 99.3417% ( 3) 00:17:44.487 17.825 - 17.920: 99.4149% ( 10) 00:17:44.487 17.920 - 18.015: 99.4368% ( 3) 00:17:44.487 18.015 - 18.110: 99.4807% ( 6) 00:17:44.487 18.110 - 18.204: 99.5685% ( 12) 00:17:44.487 18.204 - 18.299: 99.6197% ( 7) 00:17:44.487 18.299 - 18.394: 99.6782% ( 8) 00:17:44.487 18.394 - 18.489: 99.7513% ( 10) 00:17:44.487 18.489 - 18.584: 99.7806% ( 4) 00:17:44.487 18.584 - 18.679: 99.8098% ( 4) 00:17:44.487 18.679 - 18.773: 99.8391% ( 4) 00:17:44.487 18.773 - 18.868: 99.8464% ( 1) 00:17:44.487 18.868 - 18.963: 99.8610% ( 2) 00:17:44.487 19.153 - 19.247: 99.8683% ( 1) 00:17:44.487 22.661 - 22.756: 99.8757% ( 1) 00:17:44.487 22.756 - 22.850: 99.8830% ( 1) 00:17:44.487 23.135 - 23.230: 99.8903% ( 1) 00:17:44.487 23.799 - 23.893: 99.8976% ( 1) 00:17:44.487 25.221 - 25.410: 99.9049% ( 1) 00:17:44.487 27.876 - 28.065: 99.9122% ( 1) 00:17:44.487 28.444 - 28.634: 99.9195% ( 1) 00:17:44.487 29.772 - 29.961: 99.9269% ( 1) 00:17:44.487 3980.705 - 4004.978: 99.9854% ( 8) 00:17:44.487 4004.978 - 4029.250: 100.0000% ( 2) 00:17:44.487 00:17:44.487 Complete histogram 00:17:44.487 ================== 00:17:44.487 Range in us Cumulative Count 00:17:44.487 2.027 - 2.039: 1.4190% ( 194) 00:17:44.487 2.039 - 2.050: 16.6106% ( 2077) 00:17:44.487 2.050 - 2.062: 20.0410% ( 469) 00:17:44.487 2.062 - 2.074: 32.4678% ( 1699) 00:17:44.487 2.074 - 2.086: 59.6401% ( 3715) 00:17:44.487 2.086 - 2.098: 64.0506% ( 603) 00:17:44.487 2.098 - 2.110: 67.0056% ( 404) 00:17:44.487 2.110 - 2.121: 71.6501% ( 635) 00:17:44.487 2.121 - 2.133: 72.4473% ( 109) 00:17:44.487 2.133 - 2.145: 81.1001% ( 1183) 00:17:44.487 2.145 - 2.157: 89.1164% ( 1096) 00:17:44.487 2.157 - 2.169: 90.4842% ( 187) 00:17:44.487 2.169 - 2.181: 91.9909% ( 206) 00:17:44.487 2.181 - 2.193: 92.7150% ( 99) 00:17:44.487 2.193 - 2.204: 93.3294% ( 84) 00:17:44.487 2.204 - 2.216: 94.7118% ( 189) 00:17:44.487 2.216 - 2.228: 95.4652% ( 103) 00:17:44.487 2.228 - 2.240: 95.6700% ( 28) 00:17:44.487 2.240 - 2.252: 95.8821% ( 29) 00:17:44.487 2.252 - 2.264: 96.0138% ( 18) 00:17:44.487 2.264 - 2.276: 96.1015% ( 12) 00:17:44.487 2.276 - 2.287: 96.2551% ( 21) 00:17:44.487 2.287 - 2.299: 96.3136% ( 8) 00:17:44.487 2.299 - 2.311: 96.3575% ( 6) 00:17:44.487 2.311 - 2.323: 96.3795% ( 3) 00:17:44.487 2.323 - 2.335: 96.4819% ( 14) 00:17:44.487 2.335 - 2.347: 96.5769% ( 13) 00:17:44.487 2.347 - 2.359: 96.7744% ( 27) 00:17:44.487 2.359 - 2.370: 96.9865% ( 29) 00:17:44.487 2.370 - 2.382: 97.2133% ( 31) 00:17:44.487 2.382 - 2.394: 97.4473% ( 32) 00:17:44.487 2.394 - 2.406: 97.6521% ( 28) 00:17:44.487 2.406 - 2.418: 97.8057% ( 21) 00:17:44.487 2.418 - 2.430: 98.0032% ( 27) 00:17:44.487 2.430 - 2.441: 98.1422% ( 19) 00:17:44.487 2.441 - 2.453: 98.2738% ( 18) 00:17:44.487 2.453 - 2.465: 98.3543% ( 11) 00:17:44.487 2.465 - 2.477: 98.3762% ( 3) 00:17:44.487 2.477 - 2.489: 98.4201% ( 6) 00:17:44.487 2.489 - 2.501: 98.4348% ( 2) 00:17:44.487 2.501 - 2.513: 98.4421% ( 1) 00:17:44.487 2.513 - 2.524: 98.4494% ( 1) 00:17:44.487 2.524 - 2.536: 98.4567% ( 1) 00:17:44.487 2.536 - 2.548: 98.4713% ( 2) 00:17:44.487 2.560 - 2.572: 98.4933% ( 3) 00:17:44.487 2.572 - 2.584: 98.5006% ( 1) 00:17:44.487 2.596 - 2.607: 98.5079% ( 1) 00:17:44.487 2.643 - 2.655: 98.5152% ( 1) 00:17:44.487 2.702 - 2.714: 98.5225% ( 1) 00:17:44.487 2.773 - 2.785: 98.5298% ( 1) 00:17:44.487 3.342 - 3.366: 98.5518% ( 3) 00:17:44.487 3.390 - 3.413: 98.5591% ( 1) 00:17:44.487 3.413 - 3.437: 98.5810% ( 3) 
00:17:44.487 3.437 - 3.461: 98.5884% ( 1) 00:17:44.487 3.461 - 3.484: 98.5957% ( 1) 00:17:44.487 3.484 - 3.508: 98.6103% ( 2) 00:17:44.487 3.508 - 3.532: 98.6176% ( 1) 00:17:44.487 3.532 - 3.556: 98.6249% ( 1) 00:17:44.487 3.556 - 3.579: 98.6396% ( 2) 00:17:44.487 3.579 - 3.603: 98.6542% ( 2) 00:17:44.487 3.603 - 3.627: 98.6688% ( 2) 00:17:44.487 3.627 - 3.650: 98.6761% ( 1) 00:17:44.487 3.674 - 3.698: 98.6908% ( 2) 00:17:44.487 3.698 - 3.721: 98.6981% ( 1) 00:17:44.487 3.721 - 3.745: 98.7054% ( 1) 00:17:44.487 3.745 - 3.769: 98.7127% ( 1) 00:17:44.487 3.793 - 3.816: 98.7273% ( 2) 00:17:44.487 3.816 - 3.840: 98.7346% ( 1) 00:17:44.487 3.911 - 3.935: 98.7420% ( 1) 00:17:44.487 3.935 - 3.959: 98.7493% ( 1) 00:17:44.487 4.053 - 4.077: 98.7566% ( 1) 00:17:44.487 6.305 - 6.353: 98.7639% ( 1) 00:17:44.487 6.637 - 6.684: 98.7712% ( 1) 00:17:44.487 6.827 - 6.874: 98.7785% ( 1) 00:17:44.487 6.874 - 6.921: 98.7858% ( 1) 00:17:44.487 6.969 - 7.016: 98.7932% ( 1) 00:17:44.487 7.064 - 7.111: 98.8005% ( 1) 00:17:44.487 7.159 - 7.206: 98.8078% ( 1) 00:17:44.487 7.206 - 7.253: 98.8151% ( 1) 00:17:44.487 7.253 - 7.301: 98.8224% ( 1) 00:17:44.487 7.348 - 7.396: 98.8297% ( 1) 00:17:44.487 7.443 - 7.490: 98.8370% ( 1) 00:17:44.487 7.538 - 7.585: 98.8444% ( 1) 00:17:44.487 7.585 - 7.633: 98.8517% ( 1) 00:17:44.487 7.633 - 7.680: 98.8590% ( 1) 00:17:44.487 7.680 - 7.727: 98.8882% ( 4) 00:17:44.487 7.775 - 7.822: 98.8956% ( 1) 00:17:44.487 7.822 - 7.870: 98.9102% ( 2) 00:17:44.487 7.917 - 7.964: 98.9175% ( 1) 00:17:44.487 8.012 - 8.059: 98.9248% ( 1) 00:17:44.487 8.201 - 8.249: 98.9321% ( 1) 00:17:44.487 8.344 - 8.391: 98.9394% ( 1) 00:17:44.487 8.439 - 8.486: 98.9468% ( 1) 00:17:44.487 8.533 - 8.581: 98.9541% ( 1) 00:17:44.487 8.676 - 8.723: 98.9614% ( 1) 00:17:44.487 8.770 - 8.818: 98.9687% ( 1) 00:17:44.487 9.719 - 9.766: 98.9760% ( 1) 00:17:44.487 12.421 - 12.516: 98.9833% ( 1) 00:17:44.487 14.033 - 14.127: 98.9906% ( 1) 00:17:44.487 15.455 - 15.550: 98.9980% ( 1) 00:17:44.487 15.644 - 15.739: 99.0053% ( 1) 00:17:44.487 15.739 - 15.834: 99.0126% ( 1) 00:17:44.487 15.834 - 15.929: 99.0199% ( 1) 00:17:44.487 15.929 - 16.024: 99.0492% ( 4) 00:17:44.487 16.024 - 16.119: 99.0784% ( 4) 00:17:44.487 16.119 - 16.213: 99.1223% ( 6) 00:17:44.487 16.213 - 16.308: 99.1662% ( 6) 00:17:44.487 16.308 - 16.403: 99.1808% ( 2) 00:17:44.487 16.403 - 16.498: 99.1881% ( 1) 00:17:44.487 16.498 - 16.593: 99.2174% ( 4) 00:17:44.487 16.593 - 16.687: 99.2613% ( 6) 00:17:44.487 16.687 - 16.782: 99.2905% ( 4) 00:17:44.487 16.782 - 16.877: 99.3198% ( 4) 00:17:44.487 16.877 - 16.972: 99.3417% ( 3) 00:17:44.487 16.972 - 17.067: 99.3490% ( 1) 00:17:44.487 17.067 - 17.161: 99.3637% ( 2) 00:17:44.487 17.161 - 17.256: 99.3856% ( 3) 00:17:44.487 17.256 - 17.351: 99.3929% ( 1) 00:17:44.487 17.351 - 17.446: 99.4002% ( 1) 00:17:44.487 17.446 - 17.541: 99.4075% ( 1) 00:17:44.487 18.489 - 18.584: 99.4149% ( 1) 00:17:44.487 3883.615 - 3907.887: 99.4222% ( 1) 00:17:44.487 3980.705 - 4004.978: 99.8976% ( 65) 00:17:44.487 4004.978 - 4029.250: 100.0000% ( 14) 00:17:44.487 00:17:44.487 23:31:05 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:44.487 23:31:05 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:44.487 23:31:05 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:44.487 23:31:05 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:44.487 23:31:05 -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:44.745 [ 00:17:44.745 { 00:17:44.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:44.745 "subtype": "Discovery", 00:17:44.745 "listen_addresses": [], 00:17:44.745 "allow_any_host": true, 00:17:44.745 "hosts": [] 00:17:44.745 }, 00:17:44.745 { 00:17:44.745 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:44.745 "subtype": "NVMe", 00:17:44.745 "listen_addresses": [ 00:17:44.745 { 00:17:44.745 "transport": "VFIOUSER", 00:17:44.745 "trtype": "VFIOUSER", 00:17:44.745 "adrfam": "IPv4", 00:17:44.745 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:44.745 "trsvcid": "0" 00:17:44.745 } 00:17:44.745 ], 00:17:44.745 "allow_any_host": true, 00:17:44.745 "hosts": [], 00:17:44.745 "serial_number": "SPDK1", 00:17:44.745 "model_number": "SPDK bdev Controller", 00:17:44.745 "max_namespaces": 32, 00:17:44.745 "min_cntlid": 1, 00:17:44.745 "max_cntlid": 65519, 00:17:44.745 "namespaces": [ 00:17:44.745 { 00:17:44.745 "nsid": 1, 00:17:44.745 "bdev_name": "Malloc1", 00:17:44.745 "name": "Malloc1", 00:17:44.745 "nguid": "A4C9D06C14E94646A23AC73A9B53A64D", 00:17:44.745 "uuid": "a4c9d06c-14e9-4646-a23a-c73a9b53a64d" 00:17:44.745 }, 00:17:44.745 { 00:17:44.745 "nsid": 2, 00:17:44.745 "bdev_name": "Malloc3", 00:17:44.745 "name": "Malloc3", 00:17:44.745 "nguid": "7A80122512CB44C18513A71D9BCC0021", 00:17:44.745 "uuid": "7a801225-12cb-44c1-8513-a71d9bcc0021" 00:17:44.745 } 00:17:44.745 ] 00:17:44.745 }, 00:17:44.745 { 00:17:44.745 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:44.745 "subtype": "NVMe", 00:17:44.745 "listen_addresses": [ 00:17:44.745 { 00:17:44.745 "transport": "VFIOUSER", 00:17:44.745 "trtype": "VFIOUSER", 00:17:44.745 "adrfam": "IPv4", 00:17:44.745 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:44.745 "trsvcid": "0" 00:17:44.745 } 00:17:44.745 ], 00:17:44.745 "allow_any_host": true, 00:17:44.745 "hosts": [], 00:17:44.745 "serial_number": "SPDK2", 00:17:44.746 "model_number": "SPDK bdev Controller", 00:17:44.746 "max_namespaces": 32, 00:17:44.746 "min_cntlid": 1, 00:17:44.746 "max_cntlid": 65519, 00:17:44.746 "namespaces": [ 00:17:44.746 { 00:17:44.746 "nsid": 1, 00:17:44.746 "bdev_name": "Malloc2", 00:17:44.746 "name": "Malloc2", 00:17:44.746 "nguid": "442A6CD576404F9D8C1A742DBA0F9386", 00:17:44.746 "uuid": "442a6cd5-7640-4f9d-8c1a-742dba0f9386" 00:17:44.746 } 00:17:44.746 ] 00:17:44.746 } 00:17:44.746 ] 00:17:44.746 23:31:05 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:44.746 23:31:05 -- target/nvmf_vfio_user.sh@34 -- # aerpid=234002 00:17:44.746 23:31:05 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:44.746 23:31:05 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:44.746 23:31:05 -- common/autotest_common.sh@1244 -- # local i=0 00:17:44.746 23:31:05 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:44.746 23:31:05 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:44.746 23:31:05 -- common/autotest_common.sh@1255 -- # return 0 00:17:44.746 23:31:05 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:44.746 23:31:05 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:44.746 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.004 Malloc4 00:17:45.004 23:31:05 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:45.569 23:31:06 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:45.569 Asynchronous Event Request test 00:17:45.569 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.569 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.569 Registering asynchronous event callbacks... 00:17:45.569 Starting namespace attribute notice tests for all controllers... 00:17:45.569 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:45.569 aer_cb - Changed Namespace 00:17:45.569 Cleaning up... 00:17:45.827 [ 00:17:45.827 { 00:17:45.827 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.827 "subtype": "Discovery", 00:17:45.827 "listen_addresses": [], 00:17:45.827 "allow_any_host": true, 00:17:45.827 "hosts": [] 00:17:45.827 }, 00:17:45.827 { 00:17:45.827 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:45.827 "subtype": "NVMe", 00:17:45.827 "listen_addresses": [ 00:17:45.827 { 00:17:45.827 "transport": "VFIOUSER", 00:17:45.827 "trtype": "VFIOUSER", 00:17:45.827 "adrfam": "IPv4", 00:17:45.827 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:45.827 "trsvcid": "0" 00:17:45.827 } 00:17:45.827 ], 00:17:45.827 "allow_any_host": true, 00:17:45.827 "hosts": [], 00:17:45.827 "serial_number": "SPDK1", 00:17:45.827 "model_number": "SPDK bdev Controller", 00:17:45.827 "max_namespaces": 32, 00:17:45.827 "min_cntlid": 1, 00:17:45.827 "max_cntlid": 65519, 00:17:45.827 "namespaces": [ 00:17:45.827 { 00:17:45.827 "nsid": 1, 00:17:45.827 "bdev_name": "Malloc1", 00:17:45.827 "name": "Malloc1", 00:17:45.827 "nguid": "A4C9D06C14E94646A23AC73A9B53A64D", 00:17:45.827 "uuid": "a4c9d06c-14e9-4646-a23a-c73a9b53a64d" 00:17:45.827 }, 00:17:45.827 { 00:17:45.827 "nsid": 2, 00:17:45.827 "bdev_name": "Malloc3", 00:17:45.827 "name": "Malloc3", 00:17:45.827 "nguid": "7A80122512CB44C18513A71D9BCC0021", 00:17:45.827 "uuid": "7a801225-12cb-44c1-8513-a71d9bcc0021" 00:17:45.827 } 00:17:45.827 ] 00:17:45.827 }, 00:17:45.827 { 00:17:45.827 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:45.827 "subtype": "NVMe", 00:17:45.827 "listen_addresses": [ 00:17:45.827 { 00:17:45.827 "transport": "VFIOUSER", 00:17:45.827 "trtype": "VFIOUSER", 00:17:45.827 "adrfam": "IPv4", 00:17:45.827 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:45.827 "trsvcid": "0" 00:17:45.827 } 00:17:45.827 ], 00:17:45.827 "allow_any_host": true, 00:17:45.827 "hosts": [], 00:17:45.827 "serial_number": "SPDK2", 00:17:45.827 "model_number": "SPDK bdev Controller", 00:17:45.827 "max_namespaces": 32, 00:17:45.827 "min_cntlid": 1, 00:17:45.827 "max_cntlid": 65519, 00:17:45.827 "namespaces": [ 00:17:45.827 { 00:17:45.827 "nsid": 1, 00:17:45.827 "bdev_name": "Malloc2", 00:17:45.827 "name": "Malloc2", 00:17:45.827 "nguid": "442A6CD576404F9D8C1A742DBA0F9386", 00:17:45.827 "uuid": "442a6cd5-7640-4f9d-8c1a-742dba0f9386" 
00:17:45.827 }, 00:17:45.827 { 00:17:45.827 "nsid": 2, 00:17:45.827 "bdev_name": "Malloc4", 00:17:45.827 "name": "Malloc4", 00:17:45.827 "nguid": "8B11D569836D462F804089892BB9B786", 00:17:45.827 "uuid": "8b11d569-836d-462f-8040-89892bb9b786" 00:17:45.827 } 00:17:45.827 ] 00:17:45.827 } 00:17:45.827 ] 00:17:45.827 23:31:06 -- target/nvmf_vfio_user.sh@44 -- # wait 234002 00:17:45.827 23:31:06 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:45.827 23:31:06 -- target/nvmf_vfio_user.sh@95 -- # killprocess 228023 00:17:45.827 23:31:06 -- common/autotest_common.sh@926 -- # '[' -z 228023 ']' 00:17:45.827 23:31:06 -- common/autotest_common.sh@930 -- # kill -0 228023 00:17:45.827 23:31:06 -- common/autotest_common.sh@931 -- # uname 00:17:45.827 23:31:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:45.827 23:31:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 228023 00:17:45.827 23:31:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:45.827 23:31:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:45.827 23:31:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 228023' 00:17:45.827 killing process with pid 228023 00:17:45.827 23:31:06 -- common/autotest_common.sh@945 -- # kill 228023 00:17:45.827 [2024-07-11 23:31:06.721256] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:45.827 23:31:06 -- common/autotest_common.sh@950 -- # wait 228023 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=234156 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 234156' 00:17:46.393 Process pid: 234156 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:46.393 23:31:07 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 234156 00:17:46.393 23:31:07 -- common/autotest_common.sh@819 -- # '[' -z 234156 ']' 00:17:46.393 23:31:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.393 23:31:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:46.393 23:31:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.393 23:31:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:46.393 23:31:07 -- common/autotest_common.sh@10 -- # set +x 00:17:46.393 [2024-07-11 23:31:07.124970] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:46.393 [2024-07-11 23:31:07.126268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:46.393 [2024-07-11 23:31:07.126333] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.393 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.393 [2024-07-11 23:31:07.195418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.393 [2024-07-11 23:31:07.282713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.393 [2024-07-11 23:31:07.282881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.393 [2024-07-11 23:31:07.282901] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.393 [2024-07-11 23:31:07.282916] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.393 [2024-07-11 23:31:07.282999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.393 [2024-07-11 23:31:07.283057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.393 [2024-07-11 23:31:07.283174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.393 [2024-07-11 23:31:07.283177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.651 [2024-07-11 23:31:07.382533] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:17:46.651 [2024-07-11 23:31:07.382807] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:17:46.651 [2024-07-11 23:31:07.383061] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:17:46.651 [2024-07-11 23:31:07.383865] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:46.651 [2024-07-11 23:31:07.383984] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
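With the interrupt-mode target up, the trace below replays the same per-controller bring-up as the first pass. Condensed into a sketch (the loop form and the rpc shorthand are reconstructions; every command, size, NQN, and socket path is copied from the xtrace, and the loop bound comes from the 'seq 1 2' step):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I          # transport_args='-M -I' passed through
    mkdir -p /var/run/vfio-user
    for i in $(seq 1 2); do                               # NUM_DEVICES=2 in this run
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MB malloc bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done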
00:17:47.584 23:31:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:47.584 23:31:08 -- common/autotest_common.sh@852 -- # return 0 00:17:47.584 23:31:08 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:48.515 23:31:09 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:48.774 23:31:09 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:48.774 23:31:09 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:48.774 23:31:09 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:48.774 23:31:09 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:48.774 23:31:09 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:49.033 Malloc1 00:17:49.033 23:31:09 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:49.293 23:31:10 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:49.860 23:31:10 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:50.117 23:31:10 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:50.117 23:31:10 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:50.117 23:31:10 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:50.374 Malloc2 00:17:50.375 23:31:11 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:50.939 23:31:11 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:51.505 23:31:12 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:52.071 23:31:12 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:52.071 23:31:12 -- target/nvmf_vfio_user.sh@95 -- # killprocess 234156 00:17:52.071 23:31:12 -- common/autotest_common.sh@926 -- # '[' -z 234156 ']' 00:17:52.071 23:31:12 -- common/autotest_common.sh@930 -- # kill -0 234156 00:17:52.071 23:31:12 -- common/autotest_common.sh@931 -- # uname 00:17:52.071 23:31:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:52.071 23:31:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 234156 00:17:52.071 23:31:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:52.071 23:31:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:52.071 23:31:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 234156' 00:17:52.071 killing process with pid 234156 00:17:52.071 23:31:12 -- common/autotest_common.sh@945 -- # kill 234156 00:17:52.071 23:31:12 -- common/autotest_common.sh@950 -- # wait 234156 00:17:52.331 23:31:13 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:17:52.331 23:31:13 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:52.331 00:17:52.331 real 0m58.294s 00:17:52.331 user 3m52.173s 00:17:52.331 sys 0m6.086s 00:17:52.331 23:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.331 23:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:52.331 ************************************ 00:17:52.331 END TEST nvmf_vfio_user 00:17:52.331 ************************************ 00:17:52.331 23:31:13 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:52.331 23:31:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:52.331 23:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:52.331 23:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:52.331 ************************************ 00:17:52.331 START TEST nvmf_vfio_user_nvme_compliance 00:17:52.331 ************************************ 00:17:52.331 23:31:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:52.331 * Looking for test storage... 00:17:52.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:52.331 23:31:13 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.331 23:31:13 -- nvmf/common.sh@7 -- # uname -s 00:17:52.331 23:31:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.331 23:31:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.331 23:31:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.331 23:31:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.331 23:31:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.331 23:31:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.331 23:31:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.331 23:31:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.331 23:31:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.331 23:31:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.331 23:31:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:52.331 23:31:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:52.331 23:31:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.331 23:31:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.331 23:31:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.331 23:31:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.331 23:31:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.331 23:31:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.331 23:31:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.331 23:31:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.331 23:31:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.331 23:31:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.331 23:31:13 -- paths/export.sh@5 -- # export PATH 00:17:52.331 23:31:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.331 23:31:13 -- nvmf/common.sh@46 -- # : 0 00:17:52.331 23:31:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:52.331 23:31:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:52.331 23:31:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:52.331 23:31:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.331 23:31:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.331 23:31:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:52.331 23:31:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:52.331 23:31:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:52.331 23:31:13 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.331 23:31:13 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.331 23:31:13 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:52.331 23:31:13 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:52.331 23:31:13 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:52.331 23:31:13 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:52.331 23:31:13 -- compliance/compliance.sh@20 -- #
nvmfpid=234920 00:17:52.331 23:31:13 -- compliance/compliance.sh@21 -- # echo 'Process pid: 234920' 00:17:52.331 Process pid: 234920 00:17:52.331 23:31:13 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:52.331 23:31:13 -- compliance/compliance.sh@24 -- # waitforlisten 234920 00:17:52.331 23:31:13 -- common/autotest_common.sh@819 -- # '[' -z 234920 ']' 00:17:52.331 23:31:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.331 23:31:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.591 23:31:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.591 23:31:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.591 23:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:52.591 [2024-07-11 23:31:13.325887] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:52.591 [2024-07-11 23:31:13.325989] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.591 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.591 [2024-07-11 23:31:13.400654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.591 [2024-07-11 23:31:13.494083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:52.591 [2024-07-11 23:31:13.494258] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.591 [2024-07-11 23:31:13.494280] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.591 [2024-07-11 23:31:13.494295] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
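Before the CUnit suite runs, compliance.sh stands up a single vfio-user controller over JSON-RPC. The steps traced below reduce to this sketch (rpc_cmd is the autotest wrapper around scripts/rpc.py; the flag glosses -a = allow any host, -s = serial number, -m 32 = namespace cap are the usual rpc.py meanings and are assumptions here):

    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # then aim the compliance binary at the socket:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'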
00:17:52.591 [2024-07-11 23:31:13.494355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.591 [2024-07-11 23:31:13.494430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.591 [2024-07-11 23:31:13.494433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.525 23:31:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:53.525 23:31:14 -- common/autotest_common.sh@852 -- # return 0 00:17:53.525 23:31:14 -- compliance/compliance.sh@26 -- # sleep 1 00:17:54.458 23:31:15 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:54.458 23:31:15 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:54.458 23:31:15 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:54.458 23:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.458 23:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.458 23:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.458 23:31:15 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:54.458 23:31:15 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:54.458 23:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.458 23:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.717 malloc0 00:17:54.717 23:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.717 23:31:15 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:54.717 23:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.717 23:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.717 23:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.717 23:31:15 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:54.717 23:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.717 23:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.717 23:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.717 23:31:15 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:54.717 23:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.717 23:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.717 23:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.717 23:31:15 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:54.717 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.717 00:17:54.717 00:17:54.717 CUnit - A unit testing framework for C - Version 2.1-3 00:17:54.717 http://cunit.sourceforge.net/ 00:17:54.717 00:17:54.717 00:17:54.717 Suite: nvme_compliance 00:17:54.717 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-11 23:31:15.655145] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:54.717 [2024-07-11 23:31:15.655226] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:54.717 [2024-07-11 23:31:15.655255] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:54.975 passed 00:17:54.975 Test: admin_identify_ctrlr_verify_fused ...passed 00:17:54.975 Test: admin_identify_ns ...[2024-07-11 
23:31:15.889158] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:54.975 [2024-07-11 23:31:15.897160] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:55.232 passed 00:17:55.232 Test: admin_get_features_mandatory_features ...passed 00:17:55.232 Test: admin_get_features_optional_features ...passed 00:17:55.490 Test: admin_set_features_number_of_queues ...passed 00:17:55.490 Test: admin_get_log_page_mandatory_logs ...passed 00:17:55.749 Test: admin_get_log_page_with_lpo ...[2024-07-11 23:31:16.518163] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:55.749 passed 00:17:55.749 Test: fabric_property_get ...passed 00:17:56.007 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-11 23:31:16.703832] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:56.007 passed 00:17:56.007 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-11 23:31:16.871150] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.007 [2024-07-11 23:31:16.887151] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.007 passed 00:17:56.265 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-11 23:31:16.977072] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:56.265 passed 00:17:56.265 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-11 23:31:17.137166] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:56.265 [2024-07-11 23:31:17.161149] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.265 passed 00:17:56.523 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-11 23:31:17.252445] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:56.523 [2024-07-11 23:31:17.252504] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:56.523 passed 00:17:56.523 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-11 23:31:17.428149] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:56.523 [2024-07-11 23:31:17.436149] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:56.523 [2024-07-11 23:31:17.444166] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:56.523 [2024-07-11 23:31:17.452149] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:56.780 passed 00:17:56.780 Test: admin_create_io_sq_verify_pc ...[2024-07-11 23:31:17.581165] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:56.780 passed 00:17:58.153 Test: admin_create_io_qp_max_qps ...[2024-07-11 23:31:18.783244] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:58.411 passed 00:17:58.689 Test: admin_create_io_sq_shared_cq ...[2024-07-11 23:31:19.396165] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:58.689 passed 00:17:58.689 00:17:58.689 Run Summary: Type Total Ran Passed Failed Inactive 00:17:58.689 suites 1 1 n/a 0 0 00:17:58.689 tests 18 18 18 0 0 00:17:58.689 asserts 360 360 360 0 n/a 00:17:58.689 00:17:58.689 Elapsed time = 1.567 seconds 00:17:58.689 
23:31:19 -- compliance/compliance.sh@42 -- # killprocess 234920 00:17:58.689 23:31:19 -- common/autotest_common.sh@926 -- # '[' -z 234920 ']' 00:17:58.689 23:31:19 -- common/autotest_common.sh@930 -- # kill -0 234920 00:17:58.689 23:31:19 -- common/autotest_common.sh@931 -- # uname 00:17:58.689 23:31:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:58.689 23:31:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 234920 00:17:58.689 23:31:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:58.689 23:31:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:58.689 23:31:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 234920' 00:17:58.689 killing process with pid 234920 00:17:58.689 23:31:19 -- common/autotest_common.sh@945 -- # kill 234920 00:17:58.689 23:31:19 -- common/autotest_common.sh@950 -- # wait 234920 00:17:58.958 23:31:19 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:58.958 23:31:19 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:58.958 00:17:58.958 real 0m6.619s 00:17:58.958 user 0m18.919s 00:17:58.958 sys 0m0.697s 00:17:58.958 23:31:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.958 23:31:19 -- common/autotest_common.sh@10 -- # set +x 00:17:58.958 ************************************ 00:17:58.958 END TEST nvmf_vfio_user_nvme_compliance 00:17:58.958 ************************************ 00:17:58.958 23:31:19 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.958 23:31:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:58.958 23:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:58.958 23:31:19 -- common/autotest_common.sh@10 -- # set +x 00:17:58.958 ************************************ 00:17:58.958 START TEST nvmf_vfio_user_fuzz 00:17:58.958 ************************************ 00:17:58.958 23:31:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.958 * Looking for test storage... 
00:17:58.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.958 23:31:19 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.958 23:31:19 -- nvmf/common.sh@7 -- # uname -s 00:17:58.958 23:31:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.958 23:31:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.958 23:31:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.958 23:31:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.958 23:31:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.958 23:31:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.958 23:31:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.958 23:31:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.958 23:31:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.958 23:31:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.958 23:31:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:58.958 23:31:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:58.958 23:31:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.958 23:31:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.958 23:31:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.958 23:31:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.959 23:31:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.959 23:31:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.959 23:31:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.959 23:31:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.959 23:31:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.959 23:31:19 -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.959 23:31:19 -- paths/export.sh@5 -- # export PATH 00:17:58.959 23:31:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.959 23:31:19 -- nvmf/common.sh@46 -- # : 0 00:17:58.959 23:31:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:58.959 23:31:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:58.959 23:31:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:58.959 23:31:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.959 23:31:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.959 23:31:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:58.959 23:31:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:58.959 23:31:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:58.959 23:31:19 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:58.959 23:31:19 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.959 23:31:19 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:58.959 23:31:19 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:58.959 23:31:19 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:58.959 23:31:19 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:58.959 23:31:19 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:59.218 23:31:19 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=235780 00:17:59.218 23:31:19 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:59.218 23:31:19 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 235780' 00:17:59.218 Process pid: 235780 00:17:59.218 23:31:19 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:59.218 23:31:19 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 235780 00:17:59.218 23:31:19 -- common/autotest_common.sh@819 -- # '[' -z 235780 ']' 00:17:59.218 23:31:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.218 23:31:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:59.218 23:31:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
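The fuzz pass repeats the same malloc0/cnode0 bring-up as the compliance test, then points nvme_fuzz at the socket for a fixed-seed, 30-second run. In sketch form (both lines copied from the trace below; -t 30 matches the roughly 30 s between start and 'Fuzzing completed', -S 123456 fixes the seed for reproducibility, note the random_seed values reported back in the summary, and the -N/-a semantics are not asserted here):

    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F "$trid" -N -a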
00:17:59.218 23:31:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:59.218 23:31:19 -- common/autotest_common.sh@10 -- # set +x 00:17:59.476 23:31:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:59.476 23:31:20 -- common/autotest_common.sh@852 -- # return 0 00:17:59.476 23:31:20 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:00.851 23:31:21 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:00.851 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.851 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:18:00.851 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.851 23:31:21 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:00.851 23:31:21 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:00.851 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.851 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:18:00.851 malloc0 00:18:00.852 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.852 23:31:21 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:00.852 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.852 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:18:00.852 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.852 23:31:21 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:00.852 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.852 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:18:00.852 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.852 23:31:21 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:00.852 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.852 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:18:00.852 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.852 23:31:21 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:00.852 23:31:21 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:32.919 Fuzzing completed. 
Shutting down the fuzz application 00:18:32.919 00:18:32.919 Dumping successful admin opcodes: 00:18:32.919 8, 9, 10, 24, 00:18:32.919 Dumping successful io opcodes: 00:18:32.919 0, 00:18:32.919 NS: 0x200003a1ef00 I/O qp, Total commands completed: 590739, total successful commands: 2279, random_seed: 1897560320 00:18:32.919 NS: 0x200003a1ef00 admin qp, Total commands completed: 147466, total successful commands: 1192, random_seed: 1606645504 00:18:32.919 23:31:51 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:32.919 23:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.919 23:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.919 23:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.919 23:31:51 -- target/vfio_user_fuzz.sh@46 -- # killprocess 235780 00:18:32.919 23:31:51 -- common/autotest_common.sh@926 -- # '[' -z 235780 ']' 00:18:32.919 23:31:51 -- common/autotest_common.sh@930 -- # kill -0 235780 00:18:32.919 23:31:51 -- common/autotest_common.sh@931 -- # uname 00:18:32.919 23:31:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:32.919 23:31:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 235780 00:18:32.919 23:31:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:32.919 23:31:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:32.919 23:31:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 235780' 00:18:32.919 killing process with pid 235780 00:18:32.919 23:31:51 -- common/autotest_common.sh@945 -- # kill 235780 00:18:32.919 23:31:51 -- common/autotest_common.sh@950 -- # wait 235780 00:18:32.919 23:31:52 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:32.919 23:31:52 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:32.919 00:18:32.919 real 0m32.485s 00:18:32.919 user 0m35.654s 00:18:32.919 sys 0m24.752s 00:18:32.919 23:31:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.919 23:31:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.919 ************************************ 00:18:32.919 END TEST nvmf_vfio_user_fuzz 00:18:32.919 ************************************ 00:18:32.919 23:31:52 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:32.919 23:31:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:32.919 23:31:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:32.919 23:31:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.919 ************************************ 00:18:32.919 START TEST nvmf_host_management 00:18:32.919 ************************************ 00:18:32.919 23:31:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:32.919 * Looking for test storage... 
00:18:32.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.919 23:31:52 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.919 23:31:52 -- nvmf/common.sh@7 -- # uname -s 00:18:32.919 23:31:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.919 23:31:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.919 23:31:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.919 23:31:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.919 23:31:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.919 23:31:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.919 23:31:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.919 23:31:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.919 23:31:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.919 23:31:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.919 23:31:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:32.920 23:31:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:32.920 23:31:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.920 23:31:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.920 23:31:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.920 23:31:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.920 23:31:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.920 23:31:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.920 23:31:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.920 23:31:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.920 23:31:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.920 23:31:52 -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.920 23:31:52 -- paths/export.sh@5 -- # export PATH 00:18:32.920 23:31:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.920 23:31:52 -- nvmf/common.sh@46 -- # : 0 00:18:32.920 23:31:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:32.920 23:31:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:32.920 23:31:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:32.920 23:31:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.920 23:31:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.920 23:31:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:32.920 23:31:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:32.920 23:31:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:32.920 23:31:52 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.920 23:31:52 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.920 23:31:52 -- target/host_management.sh@104 -- # nvmftestinit 00:18:32.920 23:31:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:32.920 23:31:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.920 23:31:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:32.920 23:31:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:32.920 23:31:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:32.920 23:31:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.920 23:31:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.920 23:31:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.920 23:31:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:32.920 23:31:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:32.920 23:31:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:32.920 23:31:52 -- common/autotest_common.sh@10 -- # set +x 00:18:34.297 23:31:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:34.297 23:31:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:34.297 23:31:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:34.297 23:31:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:34.297 23:31:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:34.297 23:31:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:34.297 23:31:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:34.297 23:31:55 -- nvmf/common.sh@294 -- # net_devs=() 00:18:34.297 23:31:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:34.297
23:31:55 -- nvmf/common.sh@295 -- # e810=() 00:18:34.297 23:31:55 -- nvmf/common.sh@295 -- # local -ga e810 00:18:34.297 23:31:55 -- nvmf/common.sh@296 -- # x722=() 00:18:34.297 23:31:55 -- nvmf/common.sh@296 -- # local -ga x722 00:18:34.297 23:31:55 -- nvmf/common.sh@297 -- # mlx=() 00:18:34.297 23:31:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:34.297 23:31:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.297 23:31:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:34.297 23:31:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:34.297 23:31:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:34.297 23:31:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:34.297 23:31:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:34.297 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:34.297 23:31:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:34.297 23:31:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:34.297 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:34.297 23:31:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:34.297 23:31:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:34.297 23:31:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.297 23:31:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:34.297 23:31:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.297 23:31:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:84:00.0: cvl_0_0' 00:18:34.297 Found net devices under 0000:84:00.0: cvl_0_0 00:18:34.297 23:31:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.297 23:31:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:34.297 23:31:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.297 23:31:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:34.297 23:31:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.297 23:31:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:34.297 Found net devices under 0000:84:00.1: cvl_0_1 00:18:34.297 23:31:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.297 23:31:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:34.297 23:31:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:34.297 23:31:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:34.297 23:31:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.297 23:31:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.297 23:31:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:34.297 23:31:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:34.297 23:31:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:34.297 23:31:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:34.297 23:31:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:34.297 23:31:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:34.297 23:31:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.297 23:31:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:34.297 23:31:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:34.297 23:31:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:34.297 23:31:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:34.297 23:31:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:34.297 23:31:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.297 23:31:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:34.297 23:31:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.297 23:31:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:34.297 23:31:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:34.297 23:31:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:34.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:18:34.297 00:18:34.297 --- 10.0.0.2 ping statistics --- 00:18:34.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.297 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:18:34.297 23:31:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:34.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:18:34.297 00:18:34.297 --- 10.0.0.1 ping statistics --- 00:18:34.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.297 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:34.297 23:31:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.297 23:31:55 -- nvmf/common.sh@410 -- # return 0 00:18:34.297 23:31:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:34.297 23:31:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.297 23:31:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:34.297 23:31:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:34.298 23:31:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.298 23:31:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:34.298 23:31:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:34.298 23:31:55 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:18:34.298 23:31:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:34.298 23:31:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:34.298 23:31:55 -- common/autotest_common.sh@10 -- # set +x 00:18:34.298 ************************************ 00:18:34.298 START TEST nvmf_host_management 00:18:34.298 ************************************ 00:18:34.298 23:31:55 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:18:34.298 23:31:55 -- target/host_management.sh@69 -- # starttarget 00:18:34.298 23:31:55 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:34.298 23:31:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:34.298 23:31:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:34.298 23:31:55 -- common/autotest_common.sh@10 -- # set +x 00:18:34.555 23:31:55 -- nvmf/common.sh@469 -- # nvmfpid=241214 00:18:34.555 23:31:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:34.555 23:31:55 -- nvmf/common.sh@470 -- # waitforlisten 241214 00:18:34.555 23:31:55 -- common/autotest_common.sh@819 -- # '[' -z 241214 ']' 00:18:34.555 23:31:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.555 23:31:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:34.555 23:31:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.555 23:31:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:34.555 23:31:55 -- common/autotest_common.sh@10 -- # set +x 00:18:34.556 [2024-07-11 23:31:55.317974] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
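The TCP data path used from here on was stitched together by nvmf_tcp_init in the trace above: one E810 port (cvl_0_0, 10.0.0.2) is moved into a network namespace to act as the target side, while its twin (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side, so the two ends talk over the physical port pair (NET_TYPE=phy) rather than loopback. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # the two ping checks above answered in 0.301 ms and 0.217 ms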
00:18:34.556 [2024-07-11 23:31:55.318072] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.556 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.556 [2024-07-11 23:31:55.427736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.814 [2024-07-11 23:31:55.534883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:34.814 [2024-07-11 23:31:55.535037] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.814 [2024-07-11 23:31:55.535074] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.814 [2024-07-11 23:31:55.535092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.814 [2024-07-11 23:31:55.535197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.814 [2024-07-11 23:31:55.535255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.814 [2024-07-11 23:31:55.535309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:34.814 [2024-07-11 23:31:55.535312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.745 23:31:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:35.745 23:31:56 -- common/autotest_common.sh@852 -- # return 0 00:18:35.745 23:31:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:35.745 23:31:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:35.745 23:31:56 -- common/autotest_common.sh@10 -- # set +x 00:18:35.745 23:31:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.745 23:31:56 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.745 23:31:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.745 23:31:56 -- common/autotest_common.sh@10 -- # set +x 00:18:35.745 [2024-07-11 23:31:56.587543] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.745 23:31:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.745 23:31:56 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:35.745 23:31:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:35.745 23:31:56 -- common/autotest_common.sh@10 -- # set +x 00:18:35.745 23:31:56 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:35.745 23:31:56 -- target/host_management.sh@23 -- # cat 00:18:35.745 23:31:56 -- target/host_management.sh@30 -- # rpc_cmd 00:18:35.745 23:31:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.745 23:31:56 -- common/autotest_common.sh@10 -- # set +x 00:18:35.745 Malloc0 00:18:35.745 [2024-07-11 23:31:56.653109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.745 23:31:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.745 23:31:56 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:35.745 23:31:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:35.745 23:31:56 -- common/autotest_common.sh@10 -- # set +x 00:18:35.745 23:31:56 -- target/host_management.sh@73 -- # perfpid=241519 00:18:35.745 23:31:56 -- target/host_management.sh@74 -- # 
waitforlisten 241519 /var/tmp/bdevperf.sock 00:18:35.745 23:31:56 -- common/autotest_common.sh@819 -- # '[' -z 241519 ']' 00:18:35.745 23:31:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.745 23:31:56 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:35.745 23:31:56 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:35.745 23:31:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:35.745 23:31:56 -- nvmf/common.sh@520 -- # config=() 00:18:35.745 23:31:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.745 23:31:56 -- nvmf/common.sh@520 -- # local subsystem config 00:18:35.745 23:31:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:35.745 23:31:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:35.745 23:31:56 -- common/autotest_common.sh@10 -- # set +x 00:18:36.003 23:31:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:36.003 { 00:18:36.003 "params": { 00:18:36.003 "name": "Nvme$subsystem", 00:18:36.003 "trtype": "$TEST_TRANSPORT", 00:18:36.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.003 "adrfam": "ipv4", 00:18:36.003 "trsvcid": "$NVMF_PORT", 00:18:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.003 "hdgst": ${hdgst:-false}, 00:18:36.003 "ddgst": ${ddgst:-false} 00:18:36.003 }, 00:18:36.003 "method": "bdev_nvme_attach_controller" 00:18:36.003 } 00:18:36.003 EOF 00:18:36.003 )") 00:18:36.003 23:31:56 -- nvmf/common.sh@542 -- # cat 00:18:36.003 23:31:56 -- nvmf/common.sh@544 -- # jq . 00:18:36.003 23:31:56 -- nvmf/common.sh@545 -- # IFS=, 00:18:36.003 23:31:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:36.003 "params": { 00:18:36.003 "name": "Nvme0", 00:18:36.003 "trtype": "tcp", 00:18:36.003 "traddr": "10.0.0.2", 00:18:36.003 "adrfam": "ipv4", 00:18:36.003 "trsvcid": "4420", 00:18:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:36.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:36.003 "hdgst": false, 00:18:36.003 "ddgst": false 00:18:36.003 }, 00:18:36.003 "method": "bdev_nvme_attach_controller" 00:18:36.003 }' 00:18:36.003 [2024-07-11 23:31:56.771371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:36.003 [2024-07-11 23:31:56.771527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241519 ] 00:18:36.003 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.003 [2024-07-11 23:31:56.877769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.261 [2024-07-11 23:31:56.967933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.261 Running I/O for 10 seconds... 
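bdevperf is now attached over TCP: the JSON fragment assembled above (name Nvme0, traddr 10.0.0.2, trsvcid 4420, subnqn cnode0, hostnqn host0) is fed to it through --json /dev/fd/63, and -q 64 -o 65536 -w verify -t 10 asks for a 10-second verify workload at queue depth 64 with 64 KiB I/O (standard bdevperf flag meanings, assumed here). Before the management step, the test gates on I/O actually flowing; the waitforio helper traced next polls iostat over the bdevperf RPC socket, roughly as follows (a sketch reconstructed from the xtrace, not the helper's verbatim source; the retry sleep is assumed):

    i=10; ret=1
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && { ret=0; break; }   # this run saw 2077 reads on the first poll
        sleep 1; (( i-- ))
    done

Once I/O is confirmed, the test removes the host's NQN from the subsystem while the workload is still running; the flood of tqpair state messages and ABORTED - SQ DELETION (00/08) completions that follows is the expected fallout of that step, not a failure.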
00:18:37.195 23:31:57 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:18:37.195 23:31:57 -- common/autotest_common.sh@852 -- # return 0
00:18:37.195 23:31:57 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:18:37.195 23:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:18:37.195 23:31:57 -- common/autotest_common.sh@10 -- # set +x
00:18:37.195 23:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:18:37.195 23:31:57 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:37.195 23:31:57 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:18:37.195 23:31:57 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:18:37.195 23:31:57 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:18:37.195 23:31:57 -- target/host_management.sh@52 -- # local ret=1
00:18:37.195 23:31:57 -- target/host_management.sh@53 -- # local i
00:18:37.195 23:31:57 -- target/host_management.sh@54 -- # (( i = 10 ))
00:18:37.195 23:31:57 -- target/host_management.sh@54 -- # (( i != 0 ))
00:18:37.195 23:31:57 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:18:37.195 23:31:57 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:18:37.195 23:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:18:37.195 23:31:57 -- common/autotest_common.sh@10 -- # set +x
00:18:37.195 23:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:18:37.195 23:31:58 -- target/host_management.sh@55 -- # read_io_count=2077
00:18:37.195 23:31:58 -- target/host_management.sh@58 -- # '[' 2077 -ge 100 ']'
00:18:37.195 23:31:58 -- target/host_management.sh@59 -- # ret=0
00:18:37.195 23:31:58 -- target/host_management.sh@60 -- # break
00:18:37.195 23:31:58 -- target/host_management.sh@64 -- # return 0
00:18:37.195 23:31:58 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:18:37.195 23:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:18:37.195 23:31:58 -- common/autotest_common.sh@10 -- # set +x
00:18:37.195 [2024-07-11 23:31:58.022369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e130 is same with the state(5) to be set
00:18:37.196 [ the tcp.c:1574 recv-state *ERROR* above repeats 20 more times for tqpair=0x187e130 between 23:31:58.022429 and 23:31:58.022676 ]
00:18:37.196 [2024-07-11 23:31:58.023155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:37.196 [2024-07-11 23:31:58.023198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.196 [ matching nvme_io_qpair_print_command / ABORTED - SQ DELETION completion pairs follow for the remaining 63 queued I/Os (READ and WRITE, cid 0-63, lba 14720-25856), logged between 23:31:58.023227 and 23:31:58.025189 ]
00:18:37.197 [2024-07-11 23:31:58.025204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991430 is same with the state(5) to be set
00:18:37.197 [2024-07-11 23:31:58.025293] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1991430 was disconnected and freed. reset controller.
00:18:37.197 23:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:18:37.197 [2024-07-11 23:31:58.026430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:18:37.197 23:31:58 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:18:37.197 23:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:18:37.197 23:31:58 -- common/autotest_common.sh@10 -- # set +x
00:18:37.197 task offset: 20096 on job bdev=Nvme0n1 fails
00:18:37.197
00:18:37.197 Latency(us)
00:18:37.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:37.197 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.197 Job: Nvme0n1 ended in about 0.84 seconds with error
00:18:37.197 Verification LBA range: start 0x0 length 0x400
00:18:37.197 Nvme0n1 : 0.84 2617.36 163.59 76.59 0.00 23502.08 3203.98 28738.75
00:18:37.197 ===================================================================================================================
00:18:37.197 Total : 2617.36 163.59 76.59 0.00 23502.08 3203.98 28738.75
00:18:37.197 [2024-07-11 23:31:58.028349] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:37.197 [2024-07-11 23:31:58.028380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1993860 (9): Bad file descriptor
00:18:37.197 23:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:18:37.197 23:31:58 -- target/host_management.sh@87 -- # sleep 1
00:18:37.197 [2024-07-11 23:31:58.041037] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
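Note: the burst above is the crux of the host-management check. With verify I/O in flight, the host is removed from the subsystem's allowed-host list, every queued command comes back ABORTED - SQ DELETION, and re-adding the host lets bdev_nvme reset and reconnect the controller. Stripped of the test harness, the fencing step is just the two RPCs traced above; the sleep mirrors host_management.sh's one-second pause before checking the reset.

    # Fence the host, then re-admit it (RPC names and NQNs exactly as traced):
    rpc=scripts/rpc.py
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # aborts all in-flight I/O
    $rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # allow reconnect
    sleep 1                                                                                # give bdev_nvme time to reset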
00:18:38.131 23:31:59 -- target/host_management.sh@91 -- # kill -9 241519
00:18:38.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (241519) - No such process
00:18:38.131 23:31:59 -- target/host_management.sh@91 -- # true
00:18:38.131 23:31:59 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:18:38.131 23:31:59 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:18:38.131 23:31:59 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:18:38.131 23:31:59 -- nvmf/common.sh@520 -- # config=()
00:18:38.131 23:31:59 -- nvmf/common.sh@520 -- # local subsystem config
00:18:38.131 23:31:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:18:38.131 23:31:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:18:38.131 {
00:18:38.131 "params": {
00:18:38.131 "name": "Nvme$subsystem",
00:18:38.131 "trtype": "$TEST_TRANSPORT",
00:18:38.131 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:38.131 "adrfam": "ipv4",
00:18:38.131 "trsvcid": "$NVMF_PORT",
00:18:38.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:38.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:38.131 "hdgst": ${hdgst:-false},
00:18:38.131 "ddgst": ${ddgst:-false}
00:18:38.131 },
00:18:38.131 "method": "bdev_nvme_attach_controller"
00:18:38.131 }
00:18:38.131 EOF
00:18:38.131 )")
00:18:38.131 23:31:59 -- nvmf/common.sh@542 -- # cat
00:18:38.131 23:31:59 -- nvmf/common.sh@544 -- # jq .
00:18:38.131 23:31:59 -- nvmf/common.sh@545 -- # IFS=,
00:18:38.131 23:31:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:18:38.131 "params": {
00:18:38.131 "name": "Nvme0",
00:18:38.131 "trtype": "tcp",
00:18:38.131 "traddr": "10.0.0.2",
00:18:38.131 "adrfam": "ipv4",
00:18:38.131 "trsvcid": "4420",
00:18:38.131 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:18:38.131 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:18:38.131 "hdgst": false,
00:18:38.131 "ddgst": false
00:18:38.131 },
00:18:38.131 "method": "bdev_nvme_attach_controller"
00:18:38.131 }'
00:18:38.389 [2024-07-11 23:31:59.089588] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:18:38.389 [2024-07-11 23:31:59.089697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241806 ]
00:18:38.389 EAL: No free 2048 kB hugepages reported on node 1
00:18:38.389 [2024-07-11 23:31:59.163375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:38.389 [2024-07-11 23:31:59.250645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:38.647 Running I/O for 1 seconds ...
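Note: both bdevperf runs are gated by the waitforio helper whose expansion was traced at 23:31:57 above. A condensed sketch of that polling loop follows; the RPC, the jq filter, the 10-try retry count and the 100-op threshold are taken from the trace, while the inter-poll pause is an assumption (the traced expansion does not show one).

    # Poll bdevperf's RPC socket until the bdev has serviced enough reads:
    waitforio() {
        local sock=$1 bdev=$2 i ops
        for ((i = 10; i != 0; i--)); do
            ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
            [ "$ops" -ge 100 ] && return 0   # e.g. read_io_count=2077 in this run
            sleep 0.25                       # pacing between polls is an assumption
        done
        return 1
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1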
00:18:39.581
00:18:39.581 Latency(us)
00:18:39.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.581 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:39.581 Verification LBA range: start 0x0 length 0x400
00:18:39.581 Nvme0n1 : 1.01 2458.18 153.64 0.00 0.00 25713.46 1341.06 41554.68
00:18:39.581 ===================================================================================================================
00:18:39.581 Total : 2458.18 153.64 0.00 0.00 25713.46 1341.06 41554.68
00:18:39.839 23:32:00 -- target/host_management.sh@101 -- # stoptarget
00:18:39.839 23:32:00 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:18:39.839 23:32:00 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:18:39.839 23:32:00 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:18:39.839 23:32:00 -- target/host_management.sh@40 -- # nvmftestfini
00:18:39.839 23:32:00 -- nvmf/common.sh@476 -- # nvmfcleanup
00:18:39.839 23:32:00 -- nvmf/common.sh@116 -- # sync
00:18:39.839 23:32:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:18:39.839 23:32:00 -- nvmf/common.sh@119 -- # set +e
00:18:39.839 23:32:00 -- nvmf/common.sh@120 -- # for i in {1..20}
00:18:39.839 23:32:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:18:39.839 rmmod nvme_tcp
00:18:39.839 rmmod nvme_fabrics
00:18:39.839 rmmod nvme_keyring
00:18:40.097 23:32:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:18:40.097 23:32:00 -- nvmf/common.sh@123 -- # set -e
00:18:40.097 23:32:00 -- nvmf/common.sh@124 -- # return 0
00:18:40.097 23:32:00 -- nvmf/common.sh@477 -- # '[' -n 241214 ']'
00:18:40.097 23:32:00 -- nvmf/common.sh@478 -- # killprocess 241214
00:18:40.097 23:32:00 -- common/autotest_common.sh@926 -- # '[' -z 241214 ']'
00:18:40.097 23:32:00 -- common/autotest_common.sh@930 -- # kill -0 241214
00:18:40.097 23:32:00 -- common/autotest_common.sh@931 -- # uname
00:18:40.097 23:32:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:40.097 23:32:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 241214
00:18:40.097 23:32:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:18:40.097 23:32:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:18:40.097 23:32:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 241214'
00:18:40.097 killing process with pid 241214
00:18:40.097 23:32:00 -- common/autotest_common.sh@945 -- # kill 241214
00:18:40.097 23:32:00 -- common/autotest_common.sh@950 -- # wait 241214
00:18:40.356 [2024-07-11 23:32:01.077451] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:18:40.356 23:32:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:18:40.356 23:32:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:18:40.356 23:32:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:18:40.356 23:32:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:40.356 23:32:01 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:18:40.356 23:32:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:40.356 23:32:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:40.356 23:32:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:42.259 23:32:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
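Note: the teardown traced above (stoptarget + nvmftestfini) reduces to roughly the following happy-path sketch; the retry loop around modprobe and the reactor-name check in killprocess are simplified away, and the pid is this run's.

    # Condensed teardown, following the traced order: flush state files,
    # unload the kernel initiator modules, then stop the nvmf target process.
    rm -f ./local-job0-0-verify.state
    sync
    modprobe -v -r nvme-tcp        # also removes nvme_fabrics/nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    if kill -0 241214 2>/dev/null; then   # 241214 = the nvmf_tgt started earlier
        kill 241214
        wait 241214
    fi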
00:18:42.259 real 0m7.920s 00:18:42.259 user 0m25.368s 00:18:42.259 sys 0m1.775s 00:18:42.259 23:32:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.259 23:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.259 ************************************ 00:18:42.259 END TEST nvmf_host_management 00:18:42.259 ************************************ 00:18:42.259 23:32:03 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:42.259 00:18:42.259 real 0m10.835s 00:18:42.259 user 0m26.226s 00:18:42.259 sys 0m3.870s 00:18:42.259 23:32:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.259 23:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.259 ************************************ 00:18:42.259 END TEST nvmf_host_management 00:18:42.259 ************************************ 00:18:42.516 23:32:03 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:42.516 23:32:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:42.516 23:32:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:42.516 23:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.516 ************************************ 00:18:42.516 START TEST nvmf_lvol 00:18:42.516 ************************************ 00:18:42.516 23:32:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:42.516 * Looking for test storage... 00:18:42.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.516 23:32:03 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.516 23:32:03 -- nvmf/common.sh@7 -- # uname -s 00:18:42.516 23:32:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.516 23:32:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.516 23:32:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.516 23:32:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.516 23:32:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.516 23:32:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.516 23:32:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.516 23:32:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.516 23:32:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.516 23:32:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.516 23:32:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:42.516 23:32:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:42.516 23:32:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.516 23:32:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.516 23:32:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.516 23:32:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.516 23:32:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.516 23:32:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.516 23:32:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.516 23:32:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.516 23:32:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.516 23:32:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.516 23:32:03 -- paths/export.sh@5 -- # export PATH 00:18:42.516 23:32:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.516 23:32:03 -- nvmf/common.sh@46 -- # : 0 00:18:42.517 23:32:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:42.517 23:32:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:42.517 23:32:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:42.517 23:32:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.517 23:32:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.517 23:32:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:42.517 23:32:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:42.517 23:32:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:42.517 23:32:03 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.517 23:32:03 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.517 23:32:03 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:42.517 23:32:03 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:42.517 23:32:03 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.517 23:32:03 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:42.517 23:32:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:42.517 23:32:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:18:42.517 23:32:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:42.517 23:32:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:42.517 23:32:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:42.517 23:32:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.517 23:32:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.517 23:32:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.517 23:32:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:42.517 23:32:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:42.517 23:32:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:42.517 23:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:45.047 23:32:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:45.047 23:32:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:45.047 23:32:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:45.047 23:32:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:45.047 23:32:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:45.047 23:32:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:45.047 23:32:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:45.047 23:32:05 -- nvmf/common.sh@294 -- # net_devs=() 00:18:45.047 23:32:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:45.047 23:32:05 -- nvmf/common.sh@295 -- # e810=() 00:18:45.047 23:32:05 -- nvmf/common.sh@295 -- # local -ga e810 00:18:45.047 23:32:05 -- nvmf/common.sh@296 -- # x722=() 00:18:45.047 23:32:05 -- nvmf/common.sh@296 -- # local -ga x722 00:18:45.047 23:32:05 -- nvmf/common.sh@297 -- # mlx=() 00:18:45.047 23:32:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:45.047 23:32:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.047 23:32:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:45.047 23:32:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:45.047 23:32:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:45.047 23:32:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:45.047 23:32:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:45.047 23:32:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:45.047 23:32:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:45.313 23:32:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:45.313 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:45.313 23:32:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:45.313 23:32:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:45.313 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:45.313 23:32:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:45.313 23:32:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:45.313 23:32:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:45.313 23:32:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.313 23:32:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:45.313 23:32:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.313 23:32:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:45.313 Found net devices under 0000:84:00.0: cvl_0_0 00:18:45.313 23:32:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.313 23:32:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:45.313 23:32:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.313 23:32:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:45.313 23:32:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.313 23:32:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:45.313 Found net devices under 0000:84:00.1: cvl_0_1 00:18:45.313 23:32:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.313 23:32:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:45.313 23:32:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:45.313 23:32:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:45.313 23:32:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:45.313 23:32:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:45.313 23:32:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.313 23:32:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.313 23:32:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.313 23:32:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:45.313 23:32:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.313 23:32:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.313 23:32:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:45.313 23:32:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.313 23:32:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.313 23:32:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:45.313 23:32:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:45.313 23:32:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.313 23:32:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.313 23:32:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:18:45.313 23:32:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:45.313 23:32:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:18:45.313 23:32:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:45.313 23:32:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:45.313 23:32:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:45.313 23:32:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:18:45.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:45.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms
00:18:45.313
00:18:45.313 --- 10.0.0.2 ping statistics ---
00:18:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:45.313 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:18:45.313 23:32:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:45.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:45.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms
00:18:45.313
00:18:45.313 --- 10.0.0.1 ping statistics ---
00:18:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:45.313 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms
00:18:45.313 23:32:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:45.313 23:32:06 -- nvmf/common.sh@410 -- # return 0
00:18:45.313 23:32:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:18:45.313 23:32:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:45.313 23:32:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:18:45.313 23:32:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:18:45.313 23:32:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:45.313 23:32:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:18:45.313 23:32:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:18:45.313 23:32:06 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:18:45.313 23:32:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:18:45.313 23:32:06 -- common/autotest_common.sh@712 -- # xtrace_disable
00:18:45.313 23:32:06 -- common/autotest_common.sh@10 -- # set +x
00:18:45.313 23:32:06 -- nvmf/common.sh@469 -- # nvmfpid=244054
00:18:45.313 23:32:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:18:45.313 23:32:06 -- nvmf/common.sh@470 -- # waitforlisten 244054
00:18:45.313 23:32:06 -- common/autotest_common.sh@819 -- # '[' -z 244054 ']'
00:18:45.313 23:32:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:45.313 23:32:06 -- common/autotest_common.sh@824 -- # local max_retries=100
00:18:45.313 23:32:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:45.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:45.313 23:32:06 -- common/autotest_common.sh@828 -- # xtrace_disable
00:18:45.313 23:32:06 -- common/autotest_common.sh@10 -- # set +x
00:18:45.576 [2024-07-11 23:32:06.272877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
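Note: the ip/iptables dance above is how nvmf/common.sh turns two back-to-back-cabled physical ports into a real TCP path: the target-side port is isolated in a network namespace so 10.0.0.1 (initiator) and 10.0.0.2 (target) actually traverse the wire. Condensed from the trace (interface names cvl_0_0/cvl_0_1 and addresses are specific to this rig):

    # Target port goes into its own netns; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                  # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the target lives in the namespace, every target-side command (including nvmf_tgt itself, as seen just below) is wrapped in ip netns exec cvl_0_0_ns_spdk.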
00:18:45.576 [2024-07-11 23:32:06.273047] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.576 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.576 [2024-07-11 23:32:06.383842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.576 [2024-07-11 23:32:06.479770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:45.576 [2024-07-11 23:32:06.479943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.576 [2024-07-11 23:32:06.479963] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.576 [2024-07-11 23:32:06.479978] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.576 [2024-07-11 23:32:06.480075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.576 [2024-07-11 23:32:06.480119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.576 [2024-07-11 23:32:06.480122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.508 23:32:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:46.508 23:32:07 -- common/autotest_common.sh@852 -- # return 0 00:18:46.508 23:32:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:46.508 23:32:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:46.508 23:32:07 -- common/autotest_common.sh@10 -- # set +x 00:18:46.508 23:32:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.508 23:32:07 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:47.070 [2024-07-11 23:32:07.911243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.070 23:32:07 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:47.634 23:32:08 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:47.634 23:32:08 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:47.890 23:32:08 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:47.890 23:32:08 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:48.146 23:32:08 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:48.403 23:32:09 -- target/nvmf_lvol.sh@29 -- # lvs=65925977-9b87-4011-8d20-fc1e957becd4 00:18:48.403 23:32:09 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65925977-9b87-4011-8d20-fc1e957becd4 lvol 20 00:18:48.660 23:32:09 -- target/nvmf_lvol.sh@32 -- # lvol=f90d34bf-48ba-4223-ba91-d1752491786f 00:18:48.660 23:32:09 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:48.917 23:32:09 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
f90d34bf-48ba-4223-ba91-d1752491786f 00:18:49.482 23:32:10 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:49.482 [2024-07-11 23:32:10.431541] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.739 23:32:10 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:49.995 23:32:10 -- target/nvmf_lvol.sh@42 -- # perf_pid=244637 00:18:49.995 23:32:10 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:49.995 23:32:10 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:49.995 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.931 23:32:11 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f90d34bf-48ba-4223-ba91-d1752491786f MY_SNAPSHOT 00:18:51.497 23:32:12 -- target/nvmf_lvol.sh@47 -- # snapshot=93ceeff6-07d5-4f45-897e-4ea04a957b77 00:18:51.497 23:32:12 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f90d34bf-48ba-4223-ba91-d1752491786f 30 00:18:51.756 23:32:12 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 93ceeff6-07d5-4f45-897e-4ea04a957b77 MY_CLONE 00:18:52.016 23:32:12 -- target/nvmf_lvol.sh@49 -- # clone=ae66286c-8b1f-413e-9b70-4cc8021a2fda 00:18:52.016 23:32:12 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ae66286c-8b1f-413e-9b70-4cc8021a2fda 00:18:52.584 23:32:13 -- target/nvmf_lvol.sh@53 -- # wait 244637 00:19:00.701 Initializing NVMe Controllers 00:19:00.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:00.701 Controller IO queue size 128, less than required. 00:19:00.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:00.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:00.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:00.701 Initialization complete. Launching workers. 
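Stripped of the xtrace noise, nvmf_lvol.sh drives the RPC sequence below. This is a sketch built from the calls shown above, with $lvs, $lvol, $snap and $clone standing in for the UUIDs each RPC returns (illustrative shell variables, not names from the script itself):

    rpc.py bdev_malloc_create 64 512                                  # Malloc0
    rpc.py bdev_malloc_create 64 512                                  # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes at queue depth 128, mutate the volume under the I/O:
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"

The point of the test is the overlap: snapshot, resize, clone and inflate all land while the 10-second randwrite workload is in flight against the same namespace.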
00:19:00.701 ======================================================== 00:19:00.701 Latency(us) 00:19:00.701 Device Information : IOPS MiB/s Average min max 00:19:00.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11024.21 43.06 11617.48 1932.16 78486.61 00:19:00.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10847.62 42.37 11806.27 2045.08 74619.46 00:19:00.701 ======================================================== 00:19:00.701 Total : 21871.83 85.44 11711.11 1932.16 78486.61 00:19:00.701 00:19:00.701 23:32:21 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:00.961 23:32:21 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f90d34bf-48ba-4223-ba91-d1752491786f 00:19:01.529 23:32:22 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65925977-9b87-4011-8d20-fc1e957becd4 00:19:02.096 23:32:22 -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:02.096 23:32:22 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:02.096 23:32:22 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:02.096 23:32:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:02.097 23:32:22 -- nvmf/common.sh@116 -- # sync 00:19:02.097 23:32:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:02.097 23:32:22 -- nvmf/common.sh@119 -- # set +e 00:19:02.097 23:32:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:02.097 23:32:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:02.097 rmmod nvme_tcp 00:19:02.097 rmmod nvme_fabrics 00:19:02.097 rmmod nvme_keyring 00:19:02.097 23:32:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:02.097 23:32:22 -- nvmf/common.sh@123 -- # set -e 00:19:02.097 23:32:22 -- nvmf/common.sh@124 -- # return 0 00:19:02.097 23:32:22 -- nvmf/common.sh@477 -- # '[' -n 244054 ']' 00:19:02.097 23:32:22 -- nvmf/common.sh@478 -- # killprocess 244054 00:19:02.097 23:32:22 -- common/autotest_common.sh@926 -- # '[' -z 244054 ']' 00:19:02.097 23:32:22 -- common/autotest_common.sh@930 -- # kill -0 244054 00:19:02.097 23:32:22 -- common/autotest_common.sh@931 -- # uname 00:19:02.097 23:32:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:02.097 23:32:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 244054 00:19:02.097 23:32:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:02.097 23:32:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:02.097 23:32:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 244054' 00:19:02.097 killing process with pid 244054 00:19:02.097 23:32:22 -- common/autotest_common.sh@945 -- # kill 244054 00:19:02.097 23:32:22 -- common/autotest_common.sh@950 -- # wait 244054 00:19:02.355 23:32:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:02.355 23:32:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:02.355 23:32:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:02.355 23:32:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.355 23:32:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:02.355 23:32:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.355 23:32:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.355 23:32:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
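In the summary table the Total row is simply the per-core rows summed (11024.21 + 10847.62 = 21871.83 IOPS across lcores 3 and 4), and the teardown that follows is the mirror image of the setup; roughly:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete "$lvol"
    rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp / nvme_fabrics /
    modprobe -v -r nvme-fabrics    # nvme_keyring lines seen in the trace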
00:19:04.887 23:32:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:04.887 00:19:04.887 real 0m22.022s 00:19:04.887 user 1m13.490s 00:19:04.887 sys 0m6.710s 00:19:04.887 23:32:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:04.887 23:32:25 -- common/autotest_common.sh@10 -- # set +x 00:19:04.887 ************************************ 00:19:04.887 END TEST nvmf_lvol 00:19:04.887 ************************************ 00:19:04.887 23:32:25 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:04.887 23:32:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:04.887 23:32:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:04.887 23:32:25 -- common/autotest_common.sh@10 -- # set +x 00:19:04.887 ************************************ 00:19:04.887 START TEST nvmf_lvs_grow 00:19:04.887 ************************************ 00:19:04.887 23:32:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:04.887 * Looking for test storage... 00:19:04.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.888 23:32:25 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.888 23:32:25 -- nvmf/common.sh@7 -- # uname -s 00:19:04.888 23:32:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.888 23:32:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.888 23:32:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.888 23:32:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.888 23:32:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.888 23:32:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.888 23:32:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.888 23:32:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.888 23:32:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.888 23:32:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.888 23:32:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:04.888 23:32:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:04.888 23:32:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.888 23:32:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.888 23:32:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.888 23:32:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.888 23:32:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.888 23:32:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.888 23:32:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.888 23:32:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.888 23:32:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.888 23:32:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.888 23:32:25 -- paths/export.sh@5 -- # export PATH 00:19:04.888 23:32:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.888 23:32:25 -- nvmf/common.sh@46 -- # : 0 00:19:04.888 23:32:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:04.888 23:32:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:04.888 23:32:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:04.888 23:32:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.888 23:32:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.888 23:32:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:04.888 23:32:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:04.888 23:32:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:04.888 23:32:25 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.888 23:32:25 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.888 23:32:25 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:19:04.888 23:32:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:04.888 23:32:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.888 23:32:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:04.888 23:32:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:04.888 23:32:25 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:19:04.888 23:32:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.888 23:32:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.888 23:32:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.888 23:32:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:04.888 23:32:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:04.888 23:32:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:04.888 23:32:25 -- common/autotest_common.sh@10 -- # set +x 00:19:07.425 23:32:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:07.425 23:32:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:07.425 23:32:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:07.425 23:32:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:07.425 23:32:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:07.425 23:32:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:07.425 23:32:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:07.425 23:32:27 -- nvmf/common.sh@294 -- # net_devs=() 00:19:07.425 23:32:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:07.425 23:32:27 -- nvmf/common.sh@295 -- # e810=() 00:19:07.425 23:32:27 -- nvmf/common.sh@295 -- # local -ga e810 00:19:07.425 23:32:27 -- nvmf/common.sh@296 -- # x722=() 00:19:07.425 23:32:27 -- nvmf/common.sh@296 -- # local -ga x722 00:19:07.425 23:32:27 -- nvmf/common.sh@297 -- # mlx=() 00:19:07.425 23:32:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:07.425 23:32:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.425 23:32:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:07.425 23:32:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:07.425 23:32:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:07.425 23:32:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:07.425 23:32:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:07.425 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:07.425 23:32:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:07.425 
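The device scan above is plain sysfs walking: gather_supported_nvmf_pci_devs keeps a table of vendor:device IDs (0x8086:0x159b here, an Intel E810 port bound to the ice driver) and maps each matching PCI function to its kernel netdev. Approximately, paraphrasing the loop visible in the trace:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:84:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
        net_devs+=("${pci_net_devs[@]}")
    done
    # with two netdevs found, cvl_0_0 becomes the target interface, cvl_0_1 the initiator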
23:32:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:07.425 23:32:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:07.425 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:07.425 23:32:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:07.425 23:32:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:07.425 23:32:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.425 23:32:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:07.425 23:32:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.425 23:32:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:07.425 Found net devices under 0000:84:00.0: cvl_0_0 00:19:07.425 23:32:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.425 23:32:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:07.425 23:32:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.425 23:32:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:07.425 23:32:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.425 23:32:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:07.425 Found net devices under 0000:84:00.1: cvl_0_1 00:19:07.425 23:32:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.425 23:32:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:07.425 23:32:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:07.425 23:32:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:07.425 23:32:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:07.425 23:32:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.425 23:32:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.425 23:32:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.425 23:32:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:07.425 23:32:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.425 23:32:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.425 23:32:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:07.425 23:32:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.425 23:32:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.425 23:32:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:07.425 23:32:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:07.425 23:32:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.425 23:32:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.425 23:32:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.425 23:32:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.425 23:32:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:07.425 
23:32:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.425 23:32:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.425 23:32:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.425 23:32:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:07.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:19:07.425 00:19:07.425 --- 10.0.0.2 ping statistics --- 00:19:07.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.425 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:19:07.425 23:32:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:19:07.425 00:19:07.425 --- 10.0.0.1 ping statistics --- 00:19:07.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.425 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:07.425 23:32:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.425 23:32:28 -- nvmf/common.sh@410 -- # return 0 00:19:07.425 23:32:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:07.425 23:32:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.425 23:32:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:07.425 23:32:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:07.425 23:32:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.425 23:32:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:07.425 23:32:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:07.425 23:32:28 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:19:07.425 23:32:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:07.425 23:32:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:07.425 23:32:28 -- common/autotest_common.sh@10 -- # set +x 00:19:07.425 23:32:28 -- nvmf/common.sh@469 -- # nvmfpid=248090 00:19:07.425 23:32:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:07.425 23:32:28 -- nvmf/common.sh@470 -- # waitforlisten 248090 00:19:07.425 23:32:28 -- common/autotest_common.sh@819 -- # '[' -z 248090 ']' 00:19:07.425 23:32:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.425 23:32:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:07.425 23:32:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.425 23:32:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:07.425 23:32:28 -- common/autotest_common.sh@10 -- # set +x 00:19:07.425 [2024-07-11 23:32:28.212625] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
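nvmfappstart boils down to launching the target inside the namespace and waiting for its RPC socket; the only knob that differs between the two tests in this section is the reactor mask (-m 0x7 gave the lvol test cores 0-2, while -m 0x1 pins this one to core 0, matching the single "Reactor started" line). A sketch, with $SPDK standing in for the checked-out tree:

    ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # waitforlisten polls until /var/tmp/spdk.sock accepts connections, then:
    "$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192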
00:19:07.425 [2024-07-11 23:32:28.212717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.425 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.425 [2024-07-11 23:32:28.299730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.684 [2024-07-11 23:32:28.395417] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:07.684 [2024-07-11 23:32:28.395589] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.684 [2024-07-11 23:32:28.395610] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.684 [2024-07-11 23:32:28.395625] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.684 [2024-07-11 23:32:28.395658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.637 23:32:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:08.637 23:32:29 -- common/autotest_common.sh@852 -- # return 0 00:19:08.637 23:32:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:08.637 23:32:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:08.637 23:32:29 -- common/autotest_common.sh@10 -- # set +x 00:19:08.637 23:32:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.637 23:32:29 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:08.945 [2024-07-11 23:32:29.841573] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.945 23:32:29 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:19:08.945 23:32:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:08.945 23:32:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:08.945 23:32:29 -- common/autotest_common.sh@10 -- # set +x 00:19:08.945 ************************************ 00:19:08.945 START TEST lvs_grow_clean 00:19:08.945 ************************************ 00:19:08.945 23:32:29 -- common/autotest_common.sh@1104 -- # lvs_grow 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:08.946 23:32:29 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:09.511 23:32:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:09.511 23:32:30 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:09.511 23:32:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:09.511 23:32:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:09.511 23:32:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:10.081 23:32:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:10.081 23:32:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:10.081 23:32:30 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b17a0a73-f134-47c1-aac9-f314c0a3943a lvol 150 00:19:10.081 23:32:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=37ac605f-8fff-4b0f-b08f-a04494a68a1d 00:19:10.081 23:32:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:10.081 23:32:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:10.339 [2024-07-11 23:32:31.269677] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:10.340 [2024-07-11 23:32:31.269789] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:10.340 true 00:19:10.340 23:32:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:10.340 23:32:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:10.909 23:32:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:10.910 23:32:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:10.910 23:32:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37ac605f-8fff-4b0f-b08f-a04494a68a1d 00:19:11.476 23:32:32 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:11.476 [2024-07-11 23:32:32.393169] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.476 23:32:32 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:12.043 23:32:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=248680 00:19:12.043 23:32:32 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:12.043 23:32:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.043 23:32:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 248680 /var/tmp/bdevperf.sock 00:19:12.043 23:32:32 -- common/autotest_common.sh@819 -- # '[' -z 248680 ']' 00:19:12.043 23:32:32 
-- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.043 23:32:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:12.043 23:32:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.043 23:32:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:12.043 23:32:32 -- common/autotest_common.sh@10 -- # set +x 00:19:12.043 [2024-07-11 23:32:32.960580] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:12.043 [2024-07-11 23:32:32.960663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248680 ] 00:19:12.301 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.301 [2024-07-11 23:32:33.028738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.301 [2024-07-11 23:32:33.119240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.301 23:32:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:12.301 23:32:33 -- common/autotest_common.sh@852 -- # return 0 00:19:12.301 23:32:33 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:12.866 Nvme0n1 00:19:12.866 23:32:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:13.124 [ 00:19:13.124 { 00:19:13.124 "name": "Nvme0n1", 00:19:13.124 "aliases": [ 00:19:13.124 "37ac605f-8fff-4b0f-b08f-a04494a68a1d" 00:19:13.124 ], 00:19:13.124 "product_name": "NVMe disk", 00:19:13.124 "block_size": 4096, 00:19:13.124 "num_blocks": 38912, 00:19:13.124 "uuid": "37ac605f-8fff-4b0f-b08f-a04494a68a1d", 00:19:13.124 "assigned_rate_limits": { 00:19:13.124 "rw_ios_per_sec": 0, 00:19:13.124 "rw_mbytes_per_sec": 0, 00:19:13.124 "r_mbytes_per_sec": 0, 00:19:13.124 "w_mbytes_per_sec": 0 00:19:13.124 }, 00:19:13.124 "claimed": false, 00:19:13.124 "zoned": false, 00:19:13.124 "supported_io_types": { 00:19:13.124 "read": true, 00:19:13.124 "write": true, 00:19:13.124 "unmap": true, 00:19:13.124 "write_zeroes": true, 00:19:13.124 "flush": true, 00:19:13.124 "reset": true, 00:19:13.124 "compare": true, 00:19:13.124 "compare_and_write": true, 00:19:13.124 "abort": true, 00:19:13.124 "nvme_admin": true, 00:19:13.124 "nvme_io": true 00:19:13.124 }, 00:19:13.124 "driver_specific": { 00:19:13.124 "nvme": [ 00:19:13.124 { 00:19:13.124 "trid": { 00:19:13.124 "trtype": "TCP", 00:19:13.124 "adrfam": "IPv4", 00:19:13.124 "traddr": "10.0.0.2", 00:19:13.124 "trsvcid": "4420", 00:19:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:13.124 }, 00:19:13.124 "ctrlr_data": { 00:19:13.124 "cntlid": 1, 00:19:13.124 "vendor_id": "0x8086", 00:19:13.124 "model_number": "SPDK bdev Controller", 00:19:13.124 "serial_number": "SPDK0", 00:19:13.124 "firmware_revision": "24.01.1", 00:19:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:13.124 "oacs": { 00:19:13.124 "security": 0, 00:19:13.124 "format": 0, 00:19:13.124 "firmware": 0, 00:19:13.124 "ns_manage": 0 00:19:13.124 }, 00:19:13.124 "multi_ctrlr": true, 
00:19:13.124 "ana_reporting": false 00:19:13.124 }, 00:19:13.124 "vs": { 00:19:13.124 "nvme_version": "1.3" 00:19:13.124 }, 00:19:13.124 "ns_data": { 00:19:13.124 "id": 1, 00:19:13.124 "can_share": true 00:19:13.124 } 00:19:13.124 } 00:19:13.124 ], 00:19:13.124 "mp_policy": "active_passive" 00:19:13.124 } 00:19:13.124 } 00:19:13.124 ] 00:19:13.124 23:32:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=248818 00:19:13.124 23:32:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:13.124 23:32:33 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:13.382 Running I/O for 10 seconds... 00:19:14.315 Latency(us) 00:19:14.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:14.315 Nvme0n1 : 1.00 13477.00 52.64 0.00 0.00 0.00 0.00 0.00 00:19:14.315 =================================================================================================================== 00:19:14.315 Total : 13477.00 52.64 0.00 0.00 0.00 0.00 0.00 00:19:14.315 00:19:15.249 23:32:36 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:15.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.507 Nvme0n1 : 2.00 13622.50 53.21 0.00 0.00 0.00 0.00 0.00 00:19:15.507 =================================================================================================================== 00:19:15.507 Total : 13622.50 53.21 0.00 0.00 0.00 0.00 0.00 00:19:15.507 00:19:15.507 true 00:19:15.507 23:32:36 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:15.507 23:32:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:15.765 23:32:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:15.765 23:32:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:15.765 23:32:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 248818 00:19:16.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:16.332 Nvme0n1 : 3.00 13743.00 53.68 0.00 0.00 0.00 0.00 0.00 00:19:16.332 =================================================================================================================== 00:19:16.332 Total : 13743.00 53.68 0.00 0.00 0.00 0.00 0.00 00:19:16.332 00:19:17.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:17.707 Nvme0n1 : 4.00 13825.25 54.00 0.00 0.00 0.00 0.00 0.00 00:19:17.707 =================================================================================================================== 00:19:17.707 Total : 13825.25 54.00 0.00 0.00 0.00 0.00 0.00 00:19:17.707 00:19:18.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:18.641 Nvme0n1 : 5.00 13885.80 54.24 0.00 0.00 0.00 0.00 0.00 00:19:18.641 =================================================================================================================== 00:19:18.641 Total : 13885.80 54.24 0.00 0.00 0.00 0.00 0.00 00:19:18.641 00:19:19.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:19.575 Nvme0n1 : 6.00 13918.17 54.37 0.00 0.00 0.00 0.00 0.00 00:19:19.575 
=================================================================================================================== 00:19:19.575 Total : 13918.17 54.37 0.00 0.00 0.00 0.00 0.00 00:19:19.575 00:19:20.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:20.511 Nvme0n1 : 7.00 13956.14 54.52 0.00 0.00 0.00 0.00 0.00 00:19:20.511 =================================================================================================================== 00:19:20.511 Total : 13956.14 54.52 0.00 0.00 0.00 0.00 0.00 00:19:20.511 00:19:21.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:21.449 Nvme0n1 : 8.00 13985.62 54.63 0.00 0.00 0.00 0.00 0.00 00:19:21.449 =================================================================================================================== 00:19:21.449 Total : 13985.62 54.63 0.00 0.00 0.00 0.00 0.00 00:19:21.449 00:19:22.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:22.384 Nvme0n1 : 9.00 14010.33 54.73 0.00 0.00 0.00 0.00 0.00 00:19:22.384 =================================================================================================================== 00:19:22.384 Total : 14010.33 54.73 0.00 0.00 0.00 0.00 0.00 00:19:22.384 00:19:23.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:23.318 Nvme0n1 : 10.00 14034.10 54.82 0.00 0.00 0.00 0.00 0.00 00:19:23.318 =================================================================================================================== 00:19:23.318 Total : 14034.10 54.82 0.00 0.00 0.00 0.00 0.00 00:19:23.318 00:19:23.577 00:19:23.577 Latency(us) 00:19:23.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:23.577 Nvme0n1 : 10.01 14034.29 54.82 0.00 0.00 9112.70 6990.51 16796.63 00:19:23.577 =================================================================================================================== 00:19:23.577 Total : 14034.29 54.82 0.00 0.00 9112.70 6990.51 16796.63 00:19:23.577 0 00:19:23.577 23:32:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 248680 00:19:23.577 23:32:44 -- common/autotest_common.sh@926 -- # '[' -z 248680 ']' 00:19:23.577 23:32:44 -- common/autotest_common.sh@930 -- # kill -0 248680 00:19:23.577 23:32:44 -- common/autotest_common.sh@931 -- # uname 00:19:23.577 23:32:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:23.577 23:32:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 248680 00:19:23.577 23:32:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:23.577 23:32:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:23.577 23:32:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 248680' 00:19:23.577 killing process with pid 248680 00:19:23.577 23:32:44 -- common/autotest_common.sh@945 -- # kill 248680 00:19:23.577 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.577 00:19:23.577 Latency(us) 00:19:23.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.577 =================================================================================================================== 00:19:23.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.577 23:32:44 -- common/autotest_common.sh@950 -- # wait 248680 00:19:23.836 23:32:44 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:24.404 23:32:45 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:24.404 23:32:45 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:25.027 23:32:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:25.027 23:32:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:19:25.027 23:32:45 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:25.027 [2024-07-11 23:32:45.975895] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:25.286 23:32:46 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:25.286 23:32:46 -- common/autotest_common.sh@640 -- # local es=0 00:19:25.286 23:32:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:25.286 23:32:46 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.286 23:32:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:25.286 23:32:46 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.286 23:32:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:25.286 23:32:46 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.286 23:32:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:25.286 23:32:46 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.286 23:32:46 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:25.286 23:32:46 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:25.543 request: 00:19:25.543 { 00:19:25.543 "uuid": "b17a0a73-f134-47c1-aac9-f314c0a3943a", 00:19:25.543 "method": "bdev_lvol_get_lvstores", 00:19:25.543 "req_id": 1 00:19:25.543 } 00:19:25.543 Got JSON-RPC error response 00:19:25.543 response: 00:19:25.543 { 00:19:25.543 "code": -19, 00:19:25.543 "message": "No such device" 00:19:25.543 } 00:19:25.543 23:32:46 -- common/autotest_common.sh@643 -- # es=1 00:19:25.543 23:32:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:25.543 23:32:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:25.543 23:32:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:25.543 23:32:46 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:25.801 aio_bdev 00:19:25.801 23:32:46 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 37ac605f-8fff-4b0f-b08f-a04494a68a1d 00:19:25.801 23:32:46 -- common/autotest_common.sh@887 -- # local bdev_name=37ac605f-8fff-4b0f-b08f-a04494a68a1d 00:19:25.801 23:32:46 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:25.801 23:32:46 -- common/autotest_common.sh@889 -- # local i 00:19:25.801 23:32:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:25.801 23:32:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:25.801 23:32:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:26.060 23:32:46 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 37ac605f-8fff-4b0f-b08f-a04494a68a1d -t 2000 00:19:26.320 [ 00:19:26.320 { 00:19:26.320 "name": "37ac605f-8fff-4b0f-b08f-a04494a68a1d", 00:19:26.320 "aliases": [ 00:19:26.320 "lvs/lvol" 00:19:26.320 ], 00:19:26.320 "product_name": "Logical Volume", 00:19:26.320 "block_size": 4096, 00:19:26.320 "num_blocks": 38912, 00:19:26.320 "uuid": "37ac605f-8fff-4b0f-b08f-a04494a68a1d", 00:19:26.320 "assigned_rate_limits": { 00:19:26.320 "rw_ios_per_sec": 0, 00:19:26.320 "rw_mbytes_per_sec": 0, 00:19:26.320 "r_mbytes_per_sec": 0, 00:19:26.320 "w_mbytes_per_sec": 0 00:19:26.320 }, 00:19:26.320 "claimed": false, 00:19:26.320 "zoned": false, 00:19:26.320 "supported_io_types": { 00:19:26.320 "read": true, 00:19:26.320 "write": true, 00:19:26.320 "unmap": true, 00:19:26.320 "write_zeroes": true, 00:19:26.320 "flush": false, 00:19:26.320 "reset": true, 00:19:26.320 "compare": false, 00:19:26.320 "compare_and_write": false, 00:19:26.320 "abort": false, 00:19:26.320 "nvme_admin": false, 00:19:26.320 "nvme_io": false 00:19:26.320 }, 00:19:26.320 "driver_specific": { 00:19:26.320 "lvol": { 00:19:26.320 "lvol_store_uuid": "b17a0a73-f134-47c1-aac9-f314c0a3943a", 00:19:26.320 "base_bdev": "aio_bdev", 00:19:26.320 "thin_provision": false, 00:19:26.320 "snapshot": false, 00:19:26.320 "clone": false, 00:19:26.320 "esnap_clone": false 00:19:26.320 } 00:19:26.320 } 00:19:26.320 } 00:19:26.320 ] 00:19:26.320 23:32:47 -- common/autotest_common.sh@895 -- # return 0 00:19:26.320 23:32:47 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:26.320 23:32:47 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:26.579 23:32:47 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:26.579 23:32:47 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:26.579 23:32:47 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:26.839 23:32:47 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:26.839 23:32:47 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37ac605f-8fff-4b0f-b08f-a04494a68a1d 00:19:27.099 23:32:47 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b17a0a73-f134-47c1-aac9-f314c0a3943a 00:19:27.376 23:32:48 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:27.651 00:19:27.651 real 0m18.664s 00:19:27.651 user 0m18.311s 00:19:27.651 sys 0m2.145s 00:19:27.651 23:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:19:27.651 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:27.651 ************************************ 00:19:27.651 END TEST lvs_grow_clean 00:19:27.651 ************************************ 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:27.651 23:32:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:27.651 23:32:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:27.651 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:27.651 ************************************ 00:19:27.651 START TEST lvs_grow_dirty 00:19:27.651 ************************************ 00:19:27.651 23:32:48 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:27.651 23:32:48 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:28.217 23:32:48 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:28.217 23:32:48 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:28.217 23:32:49 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:28.217 23:32:49 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:28.217 23:32:49 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:28.785 23:32:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:28.785 23:32:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:28.785 23:32:49 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e9348572-c7eb-4f61-9a3e-f98038efacf8 lvol 150 00:19:28.785 23:32:49 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6115d148-3755-4a24-ba81-69ddb58680bd 00:19:28.785 23:32:49 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:28.785 23:32:49 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:29.044 [2024-07-11 23:32:49.958715] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:29.044 [2024-07-11 23:32:49.958809] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:29.044 
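The cluster counts both lvs_grow variants assert on fall straight out of the sizes involved. With a 4 MiB cluster (--cluster-sz 4194304) the arithmetic, assuming the lvstore reserves one cluster for its own metadata at this md-pages ratio, is:

    200 MiB aio file / 4 MiB = 50 clusters, minus metadata   -> 49 data clusters
    400 MiB after truncate + bdev_aio_rescan + grow_lvstore  -> 99 data clusters
    150 MiB thick lvol = ceil(150 / 4) = 38 clusters used    -> 99 - 38 = 61 free

which matches the data_clusters == 49, data_clusters == 99 and free_clusters == 61 checks in the trace.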
true 00:19:29.044 23:32:49 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:29.044 23:32:49 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:29.303 23:32:50 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:29.303 23:32:50 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:29.870 23:32:50 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6115d148-3755-4a24-ba81-69ddb58680bd 00:19:29.870 23:32:50 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:30.129 23:32:51 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:30.697 23:32:51 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=250911 00:19:30.697 23:32:51 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:30.697 23:32:51 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.697 23:32:51 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 250911 /var/tmp/bdevperf.sock 00:19:30.697 23:32:51 -- common/autotest_common.sh@819 -- # '[' -z 250911 ']' 00:19:30.697 23:32:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.697 23:32:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:30.697 23:32:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.697 23:32:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:30.697 23:32:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.697 [2024-07-11 23:32:51.388779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
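The initiator half is bdevperf started idle (-z) on its own RPC socket and then steered from outside: attach a controller, fire perform_tests. A sketch of the calls in the trace, again with $SPDK as the tree root:

    "$SPDK"/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -z flag keeps bdevperf waiting after init, so the Nvme0n1 bdev can be attached over RPC before perform_tests kicks off the 10-second run that produces the per-second table that follows.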
00:19:30.697 [2024-07-11 23:32:51.388867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250911 ] 00:19:30.697 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.697 [2024-07-11 23:32:51.457431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.697 [2024-07-11 23:32:51.550880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.263 23:32:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:31.263 23:32:51 -- common/autotest_common.sh@852 -- # return 0 00:19:31.263 23:32:51 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:31.830 Nvme0n1 00:19:31.830 23:32:52 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:32.088 [ 00:19:32.088 { 00:19:32.088 "name": "Nvme0n1", 00:19:32.088 "aliases": [ 00:19:32.088 "6115d148-3755-4a24-ba81-69ddb58680bd" 00:19:32.088 ], 00:19:32.088 "product_name": "NVMe disk", 00:19:32.088 "block_size": 4096, 00:19:32.088 "num_blocks": 38912, 00:19:32.088 "uuid": "6115d148-3755-4a24-ba81-69ddb58680bd", 00:19:32.088 "assigned_rate_limits": { 00:19:32.088 "rw_ios_per_sec": 0, 00:19:32.088 "rw_mbytes_per_sec": 0, 00:19:32.088 "r_mbytes_per_sec": 0, 00:19:32.088 "w_mbytes_per_sec": 0 00:19:32.088 }, 00:19:32.088 "claimed": false, 00:19:32.088 "zoned": false, 00:19:32.088 "supported_io_types": { 00:19:32.088 "read": true, 00:19:32.088 "write": true, 00:19:32.088 "unmap": true, 00:19:32.088 "write_zeroes": true, 00:19:32.088 "flush": true, 00:19:32.088 "reset": true, 00:19:32.088 "compare": true, 00:19:32.088 "compare_and_write": true, 00:19:32.088 "abort": true, 00:19:32.088 "nvme_admin": true, 00:19:32.088 "nvme_io": true 00:19:32.088 }, 00:19:32.088 "driver_specific": { 00:19:32.088 "nvme": [ 00:19:32.088 { 00:19:32.088 "trid": { 00:19:32.088 "trtype": "TCP", 00:19:32.088 "adrfam": "IPv4", 00:19:32.088 "traddr": "10.0.0.2", 00:19:32.088 "trsvcid": "4420", 00:19:32.088 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:32.088 }, 00:19:32.088 "ctrlr_data": { 00:19:32.088 "cntlid": 1, 00:19:32.088 "vendor_id": "0x8086", 00:19:32.088 "model_number": "SPDK bdev Controller", 00:19:32.088 "serial_number": "SPDK0", 00:19:32.088 "firmware_revision": "24.01.1", 00:19:32.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:32.088 "oacs": { 00:19:32.088 "security": 0, 00:19:32.088 "format": 0, 00:19:32.088 "firmware": 0, 00:19:32.088 "ns_manage": 0 00:19:32.088 }, 00:19:32.088 "multi_ctrlr": true, 00:19:32.088 "ana_reporting": false 00:19:32.088 }, 00:19:32.089 "vs": { 00:19:32.089 "nvme_version": "1.3" 00:19:32.089 }, 00:19:32.089 "ns_data": { 00:19:32.089 "id": 1, 00:19:32.089 "can_share": true 00:19:32.089 } 00:19:32.089 } 00:19:32.089 ], 00:19:32.089 "mp_policy": "active_passive" 00:19:32.089 } 00:19:32.089 } 00:19:32.089 ] 00:19:32.089 23:32:52 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=251051 00:19:32.089 23:32:52 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:32.089 23:32:52 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.349 Running I/O 
for 10 seconds... 00:19:33.723 Latency(us) 00:19:33.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:33.723 Nvme0n1 : 1.00 13619.00 53.20 0.00 0.00 0.00 0.00 0.00 00:19:33.723 =================================================================================================================== 00:19:33.723 Total : 13619.00 53.20 0.00 0.00 0.00 0.00 0.00 00:19:33.723 00:19:34.290 23:32:54 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:34.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:34.290 Nvme0n1 : 2.00 13813.50 53.96 0.00 0.00 0.00 0.00 0.00 00:19:34.290 =================================================================================================================== 00:19:34.290 Total : 13813.50 53.96 0.00 0.00 0.00 0.00 0.00 00:19:34.290 00:19:34.548 true 00:19:34.549 23:32:55 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:34.549 23:32:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:35.116 23:32:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:35.116 23:32:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:35.116 23:32:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 251051 00:19:35.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:35.374 Nvme0n1 : 3.00 13873.00 54.19 0.00 0.00 0.00 0.00 0.00 00:19:35.374 =================================================================================================================== 00:19:35.374 Total : 13873.00 54.19 0.00 0.00 0.00 0.00 0.00 00:19:35.374 00:19:36.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:36.307 Nvme0n1 : 4.00 13938.75 54.45 0.00 0.00 0.00 0.00 0.00 00:19:36.307 =================================================================================================================== 00:19:36.307 Total : 13938.75 54.45 0.00 0.00 0.00 0.00 0.00 00:19:36.307 00:19:37.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:37.681 Nvme0n1 : 5.00 14000.60 54.69 0.00 0.00 0.00 0.00 0.00 00:19:37.681 =================================================================================================================== 00:19:37.681 Total : 14000.60 54.69 0.00 0.00 0.00 0.00 0.00 00:19:37.681 00:19:38.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:38.616 Nvme0n1 : 6.00 14032.50 54.81 0.00 0.00 0.00 0.00 0.00 00:19:38.616 =================================================================================================================== 00:19:38.616 Total : 14032.50 54.81 0.00 0.00 0.00 0.00 0.00 00:19:38.616 00:19:39.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:39.552 Nvme0n1 : 7.00 14064.43 54.94 0.00 0.00 0.00 0.00 0.00 00:19:39.552 =================================================================================================================== 00:19:39.552 Total : 14064.43 54.94 0.00 0.00 0.00 0.00 0.00 00:19:39.552 00:19:40.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:40.486 Nvme0n1 : 8.00 14094.38 55.06 0.00 0.00 0.00 0.00 0.00 00:19:40.486 
=================================================================================================================== 00:19:40.487 Total : 14094.38 55.06 0.00 0.00 0.00 0.00 0.00 00:19:40.487 00:19:41.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:41.423 Nvme0n1 : 9.00 14111.44 55.12 0.00 0.00 0.00 0.00 0.00 00:19:41.423 =================================================================================================================== 00:19:41.423 Total : 14111.44 55.12 0.00 0.00 0.00 0.00 0.00 00:19:41.423 00:19:42.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:42.355 Nvme0n1 : 10.00 14129.90 55.19 0.00 0.00 0.00 0.00 0.00 00:19:42.355 =================================================================================================================== 00:19:42.355 Total : 14129.90 55.19 0.00 0.00 0.00 0.00 0.00 00:19:42.355 00:19:42.355 00:19:42.355 Latency(us) 00:19:42.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:42.355 Nvme0n1 : 10.01 14130.25 55.20 0.00 0.00 9051.07 2281.62 11942.12 00:19:42.355 =================================================================================================================== 00:19:42.355 Total : 14130.25 55.20 0.00 0.00 9051.07 2281.62 11942.12 00:19:42.355 0 00:19:42.355 23:33:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 250911 00:19:42.355 23:33:03 -- common/autotest_common.sh@926 -- # '[' -z 250911 ']' 00:19:42.355 23:33:03 -- common/autotest_common.sh@930 -- # kill -0 250911 00:19:42.355 23:33:03 -- common/autotest_common.sh@931 -- # uname 00:19:42.355 23:33:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:42.355 23:33:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 250911 00:19:42.614 23:33:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:42.614 23:33:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:42.614 23:33:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 250911' 00:19:42.614 killing process with pid 250911 00:19:42.614 23:33:03 -- common/autotest_common.sh@945 -- # kill 250911 00:19:42.614 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.614 00:19:42.614 Latency(us) 00:19:42.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.614 =================================================================================================================== 00:19:42.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.614 23:33:03 -- common/autotest_common.sh@950 -- # wait 250911 00:19:42.614 23:33:03 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:43.181 23:33:04 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:43.181 23:33:04 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:43.748 23:33:04 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:43.748 23:33:04 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:43.748 23:33:04 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 248090 00:19:43.748 23:33:04 -- target/nvmf_lvs_grow.sh@74 -- # wait 248090 00:19:43.748 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 248090 Killed "${NVMF_APP[@]}" "$@" 00:19:43.748 23:33:04 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:43.748 23:33:04 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:43.748 23:33:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.748 23:33:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.748 23:33:04 -- common/autotest_common.sh@10 -- # set +x 00:19:43.748 23:33:04 -- nvmf/common.sh@469 -- # nvmfpid=252539 00:19:43.748 23:33:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:43.748 23:33:04 -- nvmf/common.sh@470 -- # waitforlisten 252539 00:19:43.748 23:33:04 -- common/autotest_common.sh@819 -- # '[' -z 252539 ']' 00:19:43.748 23:33:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.748 23:33:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.748 23:33:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.748 23:33:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.748 23:33:04 -- common/autotest_common.sh@10 -- # set +x 00:19:43.748 [2024-07-11 23:33:04.600395] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:43.748 [2024-07-11 23:33:04.600499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.748 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.748 [2024-07-11 23:33:04.681091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.006 [2024-07-11 23:33:04.773081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:44.006 [2024-07-11 23:33:04.773258] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.006 [2024-07-11 23:33:04.773281] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.006 [2024-07-11 23:33:04.773296] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:44.006 [2024-07-11 23:33:04.773335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.940 23:33:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.940 23:33:05 -- common/autotest_common.sh@852 -- # return 0 00:19:44.940 23:33:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:44.940 23:33:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:44.940 23:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:44.940 23:33:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.940 23:33:05 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:45.198 [2024-07-11 23:33:05.925631] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:45.198 [2024-07-11 23:33:05.925785] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:45.198 [2024-07-11 23:33:05.925844] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:45.198 23:33:05 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:45.198 23:33:05 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 6115d148-3755-4a24-ba81-69ddb58680bd 00:19:45.198 23:33:05 -- common/autotest_common.sh@887 -- # local bdev_name=6115d148-3755-4a24-ba81-69ddb58680bd 00:19:45.198 23:33:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:45.198 23:33:05 -- common/autotest_common.sh@889 -- # local i 00:19:45.198 23:33:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:45.198 23:33:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:45.198 23:33:05 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:45.456 23:33:06 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6115d148-3755-4a24-ba81-69ddb58680bd -t 2000 00:19:45.714 [ 00:19:45.714 { 00:19:45.714 "name": "6115d148-3755-4a24-ba81-69ddb58680bd", 00:19:45.714 "aliases": [ 00:19:45.714 "lvs/lvol" 00:19:45.714 ], 00:19:45.714 "product_name": "Logical Volume", 00:19:45.714 "block_size": 4096, 00:19:45.714 "num_blocks": 38912, 00:19:45.714 "uuid": "6115d148-3755-4a24-ba81-69ddb58680bd", 00:19:45.714 "assigned_rate_limits": { 00:19:45.714 "rw_ios_per_sec": 0, 00:19:45.714 "rw_mbytes_per_sec": 0, 00:19:45.714 "r_mbytes_per_sec": 0, 00:19:45.714 "w_mbytes_per_sec": 0 00:19:45.714 }, 00:19:45.714 "claimed": false, 00:19:45.714 "zoned": false, 00:19:45.714 "supported_io_types": { 00:19:45.714 "read": true, 00:19:45.714 "write": true, 00:19:45.714 "unmap": true, 00:19:45.714 "write_zeroes": true, 00:19:45.714 "flush": false, 00:19:45.714 "reset": true, 00:19:45.714 "compare": false, 00:19:45.714 "compare_and_write": false, 00:19:45.714 "abort": false, 00:19:45.714 "nvme_admin": false, 00:19:45.714 "nvme_io": false 00:19:45.714 }, 00:19:45.714 "driver_specific": { 00:19:45.714 "lvol": { 00:19:45.714 "lvol_store_uuid": "e9348572-c7eb-4f61-9a3e-f98038efacf8", 00:19:45.714 "base_bdev": "aio_bdev", 00:19:45.714 "thin_provision": false, 00:19:45.714 "snapshot": false, 00:19:45.714 "clone": false, 00:19:45.714 "esnap_clone": false 00:19:45.714 } 00:19:45.714 } 00:19:45.714 } 00:19:45.714 ] 00:19:45.714 23:33:06 -- common/autotest_common.sh@895 -- # return 0 00:19:45.714 23:33:06 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:45.714 23:33:06 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:45.994 23:33:06 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:45.994 23:33:06 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:45.994 23:33:06 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:46.261 23:33:07 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:46.261 23:33:07 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:46.519 [2024-07-11 23:33:07.431291] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:46.777 23:33:07 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:46.777 23:33:07 -- common/autotest_common.sh@640 -- # local es=0 00:19:46.777 23:33:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:46.777 23:33:07 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.777 23:33:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:46.777 23:33:07 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.777 23:33:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:46.777 23:33:07 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.777 23:33:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:46.777 23:33:07 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.777 23:33:07 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:46.777 23:33:07 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:47.034 request: 00:19:47.034 { 00:19:47.034 "uuid": "e9348572-c7eb-4f61-9a3e-f98038efacf8", 00:19:47.034 "method": "bdev_lvol_get_lvstores", 00:19:47.034 "req_id": 1 00:19:47.034 } 00:19:47.034 Got JSON-RPC error response 00:19:47.034 response: 00:19:47.034 { 00:19:47.034 "code": -19, 00:19:47.034 "message": "No such device" 00:19:47.034 } 00:19:47.034 23:33:07 -- common/autotest_common.sh@643 -- # es=1 00:19:47.034 23:33:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:47.034 23:33:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:47.034 23:33:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:47.034 23:33:07 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:47.292 aio_bdev 00:19:47.292 23:33:08 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6115d148-3755-4a24-ba81-69ddb58680bd 00:19:47.292 23:33:08 -- 
common/autotest_common.sh@887 -- # local bdev_name=6115d148-3755-4a24-ba81-69ddb58680bd 00:19:47.292 23:33:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:47.292 23:33:08 -- common/autotest_common.sh@889 -- # local i 00:19:47.292 23:33:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:47.292 23:33:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:47.292 23:33:08 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:47.548 23:33:08 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6115d148-3755-4a24-ba81-69ddb58680bd -t 2000 00:19:47.805 [ 00:19:47.805 { 00:19:47.805 "name": "6115d148-3755-4a24-ba81-69ddb58680bd", 00:19:47.805 "aliases": [ 00:19:47.805 "lvs/lvol" 00:19:47.805 ], 00:19:47.805 "product_name": "Logical Volume", 00:19:47.805 "block_size": 4096, 00:19:47.805 "num_blocks": 38912, 00:19:47.805 "uuid": "6115d148-3755-4a24-ba81-69ddb58680bd", 00:19:47.805 "assigned_rate_limits": { 00:19:47.805 "rw_ios_per_sec": 0, 00:19:47.805 "rw_mbytes_per_sec": 0, 00:19:47.805 "r_mbytes_per_sec": 0, 00:19:47.805 "w_mbytes_per_sec": 0 00:19:47.805 }, 00:19:47.805 "claimed": false, 00:19:47.805 "zoned": false, 00:19:47.805 "supported_io_types": { 00:19:47.805 "read": true, 00:19:47.805 "write": true, 00:19:47.805 "unmap": true, 00:19:47.805 "write_zeroes": true, 00:19:47.805 "flush": false, 00:19:47.805 "reset": true, 00:19:47.805 "compare": false, 00:19:47.805 "compare_and_write": false, 00:19:47.805 "abort": false, 00:19:47.805 "nvme_admin": false, 00:19:47.805 "nvme_io": false 00:19:47.805 }, 00:19:47.805 "driver_specific": { 00:19:47.805 "lvol": { 00:19:47.805 "lvol_store_uuid": "e9348572-c7eb-4f61-9a3e-f98038efacf8", 00:19:47.805 "base_bdev": "aio_bdev", 00:19:47.805 "thin_provision": false, 00:19:47.805 "snapshot": false, 00:19:47.805 "clone": false, 00:19:47.805 "esnap_clone": false 00:19:47.805 } 00:19:47.805 } 00:19:47.805 } 00:19:47.805 ] 00:19:47.805 23:33:08 -- common/autotest_common.sh@895 -- # return 0 00:19:47.805 23:33:08 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:47.805 23:33:08 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:48.062 23:33:08 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:48.062 23:33:08 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:48.062 23:33:08 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:48.319 23:33:09 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:48.319 23:33:09 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6115d148-3755-4a24-ba81-69ddb58680bd 00:19:48.577 23:33:09 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e9348572-c7eb-4f61-9a3e-f98038efacf8 00:19:48.834 23:33:09 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:49.400 23:33:10 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:49.400 00:19:49.400 real 0m21.520s 00:19:49.400 user 
0m53.628s 00:19:49.400 sys 0m5.841s 00:19:49.400 23:33:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.400 23:33:10 -- common/autotest_common.sh@10 -- # set +x 00:19:49.400 ************************************ 00:19:49.400 END TEST lvs_grow_dirty 00:19:49.400 ************************************ 00:19:49.400 23:33:10 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:49.400 23:33:10 -- common/autotest_common.sh@796 -- # type=--id 00:19:49.400 23:33:10 -- common/autotest_common.sh@797 -- # id=0 00:19:49.400 23:33:10 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:49.400 23:33:10 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:49.400 23:33:10 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:49.400 23:33:10 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:49.400 23:33:10 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:49.400 23:33:10 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:49.400 nvmf_trace.0 00:19:49.400 23:33:10 -- common/autotest_common.sh@811 -- # return 0 00:19:49.400 23:33:10 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:49.400 23:33:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:49.400 23:33:10 -- nvmf/common.sh@116 -- # sync 00:19:49.400 23:33:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:49.400 23:33:10 -- nvmf/common.sh@119 -- # set +e 00:19:49.400 23:33:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:49.400 23:33:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:49.400 rmmod nvme_tcp 00:19:49.400 rmmod nvme_fabrics 00:19:49.400 rmmod nvme_keyring 00:19:49.400 23:33:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:49.400 23:33:10 -- nvmf/common.sh@123 -- # set -e 00:19:49.400 23:33:10 -- nvmf/common.sh@124 -- # return 0 00:19:49.400 23:33:10 -- nvmf/common.sh@477 -- # '[' -n 252539 ']' 00:19:49.400 23:33:10 -- nvmf/common.sh@478 -- # killprocess 252539 00:19:49.400 23:33:10 -- common/autotest_common.sh@926 -- # '[' -z 252539 ']' 00:19:49.400 23:33:10 -- common/autotest_common.sh@930 -- # kill -0 252539 00:19:49.400 23:33:10 -- common/autotest_common.sh@931 -- # uname 00:19:49.400 23:33:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:49.400 23:33:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 252539 00:19:49.400 23:33:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:49.400 23:33:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:49.400 23:33:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 252539' 00:19:49.400 killing process with pid 252539 00:19:49.400 23:33:10 -- common/autotest_common.sh@945 -- # kill 252539 00:19:49.400 23:33:10 -- common/autotest_common.sh@950 -- # wait 252539 00:19:49.659 23:33:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:49.659 23:33:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:49.659 23:33:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:49.659 23:33:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.659 23:33:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:49.659 23:33:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.659 23:33:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.659 23:33:10 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:52.195 23:33:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:52.195 00:19:52.195 real 0m47.233s 00:19:52.195 user 1m19.462s 00:19:52.195 sys 0m10.585s 00:19:52.195 23:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.195 23:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:52.195 ************************************ 00:19:52.195 END TEST nvmf_lvs_grow 00:19:52.195 ************************************ 00:19:52.195 23:33:12 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:52.195 23:33:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:52.195 23:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:52.195 23:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:52.195 ************************************ 00:19:52.195 START TEST nvmf_bdev_io_wait 00:19:52.195 ************************************ 00:19:52.195 23:33:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:52.195 * Looking for test storage... 00:19:52.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.195 23:33:12 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.195 23:33:12 -- nvmf/common.sh@7 -- # uname -s 00:19:52.195 23:33:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.195 23:33:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.195 23:33:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.195 23:33:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.195 23:33:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.195 23:33:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.195 23:33:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.195 23:33:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.195 23:33:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.195 23:33:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.195 23:33:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.195 23:33:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.195 23:33:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.195 23:33:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.195 23:33:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.195 23:33:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.195 23:33:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.195 23:33:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.195 23:33:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.195 23:33:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.195 23:33:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.195 23:33:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.195 23:33:12 -- paths/export.sh@5 -- # export PATH 00:19:52.195 23:33:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.195 23:33:12 -- nvmf/common.sh@46 -- # : 0 00:19:52.195 23:33:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:52.195 23:33:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:52.195 23:33:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:52.195 23:33:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.195 23:33:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.195 23:33:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:52.195 23:33:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:52.195 23:33:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:52.195 23:33:12 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.195 23:33:12 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.195 23:33:12 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:52.195 23:33:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:52.195 23:33:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.195 23:33:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:52.195 23:33:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:52.195 23:33:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:52.195 23:33:12 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.195 23:33:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.195 23:33:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.195 23:33:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:52.195 23:33:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:52.195 23:33:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:52.195 23:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:54.735 23:33:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:54.735 23:33:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:54.735 23:33:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:54.735 23:33:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:54.735 23:33:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:54.735 23:33:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:54.735 23:33:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:54.735 23:33:15 -- nvmf/common.sh@294 -- # net_devs=() 00:19:54.735 23:33:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:54.735 23:33:15 -- nvmf/common.sh@295 -- # e810=() 00:19:54.735 23:33:15 -- nvmf/common.sh@295 -- # local -ga e810 00:19:54.735 23:33:15 -- nvmf/common.sh@296 -- # x722=() 00:19:54.735 23:33:15 -- nvmf/common.sh@296 -- # local -ga x722 00:19:54.735 23:33:15 -- nvmf/common.sh@297 -- # mlx=() 00:19:54.735 23:33:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:54.735 23:33:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.735 23:33:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:54.735 23:33:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:54.735 23:33:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:54.735 23:33:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:54.735 23:33:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:54.735 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:54.735 23:33:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:19:54.735 23:33:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:54.735 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:54.735 23:33:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:54.735 23:33:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:54.735 23:33:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.735 23:33:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:54.735 23:33:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.735 23:33:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:54.735 Found net devices under 0000:84:00.0: cvl_0_0 00:19:54.735 23:33:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.735 23:33:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:54.735 23:33:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.735 23:33:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:54.735 23:33:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.735 23:33:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:54.735 Found net devices under 0000:84:00.1: cvl_0_1 00:19:54.735 23:33:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.735 23:33:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:54.735 23:33:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:54.735 23:33:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:54.735 23:33:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:54.735 23:33:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.735 23:33:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.735 23:33:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.735 23:33:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:54.735 23:33:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.735 23:33:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.735 23:33:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:54.735 23:33:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.735 23:33:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.735 23:33:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:54.735 23:33:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:54.735 23:33:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.735 23:33:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.735 23:33:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.735 23:33:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.735 23:33:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:54.735 23:33:15 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.735 23:33:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.735 23:33:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.735 23:33:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:54.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:19:54.735 00:19:54.735 --- 10.0.0.2 ping statistics --- 00:19:54.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.735 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:54.735 23:33:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:54.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:19:54.735 00:19:54.735 --- 10.0.0.1 ping statistics --- 00:19:54.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.735 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:19:54.735 23:33:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.736 23:33:15 -- nvmf/common.sh@410 -- # return 0 00:19:54.736 23:33:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:54.736 23:33:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.736 23:33:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:54.736 23:33:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:54.736 23:33:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.736 23:33:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:54.736 23:33:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:54.736 23:33:15 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:54.736 23:33:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:54.736 23:33:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:54.736 23:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:54.736 23:33:15 -- nvmf/common.sh@469 -- # nvmfpid=255869 00:19:54.736 23:33:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:54.736 23:33:15 -- nvmf/common.sh@470 -- # waitforlisten 255869 00:19:54.736 23:33:15 -- common/autotest_common.sh@819 -- # '[' -z 255869 ']' 00:19:54.736 23:33:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.736 23:33:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:54.736 23:33:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.736 23:33:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:54.736 23:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:54.736 [2024-07-11 23:33:15.571370] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:54.736 [2024-07-11 23:33:15.571549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.736 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.736 [2024-07-11 23:33:15.680872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.994 [2024-07-11 23:33:15.778723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:54.994 [2024-07-11 23:33:15.778890] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.994 [2024-07-11 23:33:15.778910] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.994 [2024-07-11 23:33:15.778924] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.994 [2024-07-11 23:33:15.778996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.994 [2024-07-11 23:33:15.779098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.995 [2024-07-11 23:33:15.779157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.995 [2024-07-11 23:33:15.779162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.995 23:33:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.995 23:33:15 -- common/autotest_common.sh@852 -- # return 0 00:19:54.995 23:33:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:54.995 23:33:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:54.995 23:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:54.995 23:33:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.995 23:33:15 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:54.995 23:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.995 23:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:54.995 23:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.995 23:33:15 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:54.995 23:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.995 23:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 23:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.255 23:33:15 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.255 23:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.255 23:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 [2024-07-11 23:33:15.956132] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.255 23:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.255 23:33:15 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.255 23:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.255 23:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 Malloc0 00:19:55.255 23:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.255 23:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.255 23:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 23:33:16 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.255 23:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.255 23:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 23:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.255 23:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.255 23:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 [2024-07-11 23:33:16.025240] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.255 23:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=255892 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@30 -- # READ_PID=255893 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:55.255 23:33:16 -- nvmf/common.sh@520 -- # config=() 00:19:55.255 23:33:16 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=255896 00:19:55.255 23:33:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.255 23:33:16 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:55.256 23:33:16 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.256 { 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme$subsystem", 00:19:55.256 "trtype": "$TEST_TRANSPORT", 00:19:55.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "$NVMF_PORT", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.256 "hdgst": ${hdgst:-false}, 00:19:55.256 "ddgst": ${ddgst:-false} 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 } 00:19:55.256 EOF 00:19:55.256 )") 00:19:55.256 23:33:16 -- nvmf/common.sh@520 -- # config=() 00:19:55.256 23:33:16 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.256 23:33:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.256 { 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme$subsystem", 00:19:55.256 "trtype": "$TEST_TRANSPORT", 00:19:55.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "$NVMF_PORT", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.256 "hdgst": ${hdgst:-false}, 00:19:55.256 "ddgst": ${ddgst:-false} 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 } 00:19:55.256 EOF 00:19:55.256 )") 00:19:55.256 23:33:16 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:55.256 23:33:16 -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:55.256 23:33:16 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=255898 00:19:55.256 23:33:16 -- nvmf/common.sh@520 -- # config=() 00:19:55.256 23:33:16 -- target/bdev_io_wait.sh@35 -- # sync 00:19:55.256 23:33:16 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # cat 00:19:55.256 23:33:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.256 { 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme$subsystem", 00:19:55.256 "trtype": "$TEST_TRANSPORT", 00:19:55.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "$NVMF_PORT", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.256 "hdgst": ${hdgst:-false}, 00:19:55.256 "ddgst": ${ddgst:-false} 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 } 00:19:55.256 EOF 00:19:55.256 )") 00:19:55.256 23:33:16 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:55.256 23:33:16 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # cat 00:19:55.256 23:33:16 -- nvmf/common.sh@520 -- # config=() 00:19:55.256 23:33:16 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.256 23:33:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.256 { 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme$subsystem", 00:19:55.256 "trtype": "$TEST_TRANSPORT", 00:19:55.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "$NVMF_PORT", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.256 "hdgst": ${hdgst:-false}, 00:19:55.256 "ddgst": ${ddgst:-false} 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 } 00:19:55.256 EOF 00:19:55.256 )") 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # cat 00:19:55.256 23:33:16 -- target/bdev_io_wait.sh@37 -- # wait 255892 00:19:55.256 23:33:16 -- nvmf/common.sh@542 -- # cat 00:19:55.256 23:33:16 -- nvmf/common.sh@544 -- # jq . 00:19:55.256 23:33:16 -- nvmf/common.sh@544 -- # jq . 00:19:55.256 23:33:16 -- nvmf/common.sh@544 -- # jq . 00:19:55.256 23:33:16 -- nvmf/common.sh@545 -- # IFS=, 00:19:55.256 23:33:16 -- nvmf/common.sh@545 -- # IFS=, 00:19:55.256 23:33:16 -- nvmf/common.sh@544 -- # jq . 
00:19:55.256 23:33:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme1", 00:19:55.256 "trtype": "tcp", 00:19:55.256 "traddr": "10.0.0.2", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "4420", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.256 "hdgst": false, 00:19:55.256 "ddgst": false 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 }' 00:19:55.256 23:33:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme1", 00:19:55.256 "trtype": "tcp", 00:19:55.256 "traddr": "10.0.0.2", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "4420", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.256 "hdgst": false, 00:19:55.256 "ddgst": false 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 }' 00:19:55.256 23:33:16 -- nvmf/common.sh@545 -- # IFS=, 00:19:55.256 23:33:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme1", 00:19:55.256 "trtype": "tcp", 00:19:55.256 "traddr": "10.0.0.2", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "4420", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.256 "hdgst": false, 00:19:55.256 "ddgst": false 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 }' 00:19:55.256 23:33:16 -- nvmf/common.sh@545 -- # IFS=, 00:19:55.256 23:33:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:55.256 "params": { 00:19:55.256 "name": "Nvme1", 00:19:55.256 "trtype": "tcp", 00:19:55.256 "traddr": "10.0.0.2", 00:19:55.256 "adrfam": "ipv4", 00:19:55.256 "trsvcid": "4420", 00:19:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.256 "hdgst": false, 00:19:55.256 "ddgst": false 00:19:55.256 }, 00:19:55.256 "method": "bdev_nvme_attach_controller" 00:19:55.256 }' 00:19:55.256 [2024-07-11 23:33:16.074912] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:55.256 [2024-07-11 23:33:16.074993] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:55.256 [2024-07-11 23:33:16.076600] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:55.256 [2024-07-11 23:33:16.076603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:55.256 [2024-07-11 23:33:16.076602] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:55.256 [2024-07-11 23:33:16.076703] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:55.256 [2024-07-11 23:33:16.076704] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:55.256 [2024-07-11 23:33:16.076704] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:55.256 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.515 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-11 23:33:16.239338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.515 [2024-07-11 23:33:16.306372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:55.515 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-11 23:33:16.350789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.515 [2024-07-11 23:33:16.431963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:55.515 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.773 [2024-07-11 23:33:16.474282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.774 [2024-07-11 23:33:16.551638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.774 [2024-07-11 23:33:16.584451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.774 [2024-07-11 23:33:16.657318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:56.032 Running I/O for 1 seconds... 00:19:56.032 Running I/O for 1 seconds... 00:19:56.032 Running I/O for 1 seconds... 00:19:56.032 Running I/O for 1 seconds...
00:19:56.967 00:19:56.967 Latency(us) 00:19:56.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.967 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:56.967 Nvme1n1 : 1.00 201246.80 786.12 0.00 0.00 633.44 251.83 819.20 00:19:56.967 =================================================================================================================== 00:19:56.967 Total : 201246.80 786.12 0.00 0.00 633.44 251.83 819.20 00:19:56.967 00:19:56.967 Latency(us) 00:19:56.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.967 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:56.967 Nvme1n1 : 1.02 6649.47 25.97 0.00 0.00 19015.44 8689.59 28156.21 00:19:56.967 =================================================================================================================== 00:19:56.967 Total : 6649.47 25.97 0.00 0.00 19015.44 8689.59 28156.21 00:19:56.967 00:19:56.967 Latency(us) 00:19:56.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.967 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:56.967 Nvme1n1 : 1.01 6598.59 25.78 0.00 0.00 19339.57 5339.97 41943.04 00:19:56.967 =================================================================================================================== 00:19:56.967 Total : 6598.59 25.78 0.00 0.00 19339.57 5339.97 41943.04 00:19:56.967 00:19:56.967 Latency(us) 00:19:56.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.967 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:56.967 Nvme1n1 : 1.01 9871.10 38.56 0.00 0.00 12908.88 8592.50 23787.14 00:19:56.967 =================================================================================================================== 00:19:56.967 Total : 9871.10 38.56 0.00 0.00 12908.88 8592.50 23787.14 00:19:57.225 23:33:18 -- target/bdev_io_wait.sh@38 -- # wait 255893 00:19:57.483 23:33:18 -- target/bdev_io_wait.sh@39 -- # wait 255896 00:19:57.483 23:33:18 -- target/bdev_io_wait.sh@40 -- # wait 255898 00:19:57.483 23:33:18 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.483 23:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.483 23:33:18 -- common/autotest_common.sh@10 -- # set +x 00:19:57.483 23:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.483 23:33:18 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:57.483 23:33:18 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:57.483 23:33:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:57.483 23:33:18 -- nvmf/common.sh@116 -- # sync 00:19:57.483 23:33:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:57.483 23:33:18 -- nvmf/common.sh@119 -- # set +e 00:19:57.483 23:33:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:57.483 23:33:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:57.483 rmmod nvme_tcp 00:19:57.483 rmmod nvme_fabrics 00:19:57.483 rmmod nvme_keyring 00:19:57.483 23:33:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:57.483 23:33:18 -- nvmf/common.sh@123 -- # set -e 00:19:57.483 23:33:18 -- nvmf/common.sh@124 -- # return 0 00:19:57.483 23:33:18 -- nvmf/common.sh@477 -- # '[' -n 255869 ']' 00:19:57.483 23:33:18 -- nvmf/common.sh@478 -- # killprocess 255869 00:19:57.483 23:33:18 -- common/autotest_common.sh@926 -- # '[' -z 255869 ']' 00:19:57.483 23:33:18 -- common/autotest_common.sh@930 
-- # kill -0 255869 00:19:57.483 23:33:18 -- common/autotest_common.sh@931 -- # uname 00:19:57.483 23:33:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:57.483 23:33:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 255869 00:19:57.483 23:33:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:57.483 23:33:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:57.483 23:33:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 255869' 00:19:57.483 killing process with pid 255869 00:19:57.483 23:33:18 -- common/autotest_common.sh@945 -- # kill 255869 00:19:57.483 23:33:18 -- common/autotest_common.sh@950 -- # wait 255869 00:19:57.743 23:33:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:57.743 23:33:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:57.743 23:33:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:57.743 23:33:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.743 23:33:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:57.743 23:33:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.743 23:33:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.743 23:33:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.282 23:33:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:00.282 00:20:00.282 real 0m8.057s 00:20:00.282 user 0m17.240s 00:20:00.282 sys 0m4.244s 00:20:00.282 23:33:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.282 23:33:20 -- common/autotest_common.sh@10 -- # set +x 00:20:00.282 ************************************ 00:20:00.282 END TEST nvmf_bdev_io_wait 00:20:00.282 ************************************ 00:20:00.282 23:33:20 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:00.282 23:33:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:00.282 23:33:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:00.282 23:33:20 -- common/autotest_common.sh@10 -- # set +x 00:20:00.282 ************************************ 00:20:00.282 START TEST nvmf_queue_depth 00:20:00.282 ************************************ 00:20:00.282 23:33:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:00.282 * Looking for test storage... 
00:20:00.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.282 23:33:20 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.282 23:33:20 -- nvmf/common.sh@7 -- # uname -s 00:20:00.282 23:33:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.282 23:33:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.282 23:33:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.282 23:33:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.282 23:33:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.282 23:33:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.282 23:33:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.282 23:33:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.283 23:33:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.283 23:33:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.283 23:33:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:00.283 23:33:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:00.283 23:33:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.283 23:33:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.283 23:33:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.283 23:33:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.283 23:33:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.283 23:33:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.283 23:33:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.283 23:33:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.283 23:33:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.283 23:33:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.283 23:33:20 -- paths/export.sh@5 -- # export PATH 00:20:00.283 23:33:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.283 23:33:20 -- nvmf/common.sh@46 -- # : 0 00:20:00.283 23:33:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:00.283 23:33:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:00.283 23:33:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:00.283 23:33:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.283 23:33:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.283 23:33:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:00.283 23:33:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:00.283 23:33:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:00.283 23:33:20 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:00.283 23:33:20 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:00.283 23:33:20 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.283 23:33:20 -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:00.283 23:33:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:00.283 23:33:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.283 23:33:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:00.283 23:33:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:00.283 23:33:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:00.283 23:33:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.283 23:33:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.283 23:33:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.283 23:33:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:00.283 23:33:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:00.283 23:33:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:00.283 23:33:20 -- common/autotest_common.sh@10 -- # set +x 00:20:02.822 23:33:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:02.822 23:33:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:02.822 23:33:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:02.822 23:33:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:02.822 23:33:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:02.822 23:33:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:02.822 23:33:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:02.822 23:33:23 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:02.822 23:33:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:02.822 23:33:23 -- nvmf/common.sh@295 -- # e810=() 00:20:02.822 23:33:23 -- nvmf/common.sh@295 -- # local -ga e810 00:20:02.822 23:33:23 -- nvmf/common.sh@296 -- # x722=() 00:20:02.822 23:33:23 -- nvmf/common.sh@296 -- # local -ga x722 00:20:02.822 23:33:23 -- nvmf/common.sh@297 -- # mlx=() 00:20:02.822 23:33:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:02.822 23:33:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.822 23:33:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:02.822 23:33:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:02.822 23:33:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:02.822 23:33:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:02.822 23:33:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:02.822 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:02.822 23:33:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:02.822 23:33:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:02.822 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:02.822 23:33:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:02.822 23:33:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:02.822 23:33:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.822 23:33:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:02.822 23:33:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
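The trace above is nvmf/common.sh resolving each matched PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, in bash; the PCI address and the cvl_* interface name are taken from this run and will differ on other rigs:

  pci=0000:84:00.0                                   # first E810 port reported above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdevs bound to this function
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # prints cvl_0_0 on this rig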
00:20:02.822 23:33:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:02.822 Found net devices under 0000:84:00.0: cvl_0_0 00:20:02.822 23:33:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.822 23:33:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:02.822 23:33:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.822 23:33:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:02.822 23:33:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.822 23:33:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:02.822 Found net devices under 0000:84:00.1: cvl_0_1 00:20:02.822 23:33:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.822 23:33:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:02.822 23:33:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:02.822 23:33:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:02.822 23:33:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:02.822 23:33:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.822 23:33:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.822 23:33:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.822 23:33:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:02.822 23:33:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.822 23:33:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.822 23:33:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:02.822 23:33:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.822 23:33:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.822 23:33:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:02.822 23:33:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:02.822 23:33:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.822 23:33:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.822 23:33:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.822 23:33:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.822 23:33:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:02.823 23:33:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.823 23:33:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.823 23:33:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.823 23:33:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:02.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:20:02.823 00:20:02.823 --- 10.0.0.2 ping statistics --- 00:20:02.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.823 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:02.823 23:33:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:20:02.823 00:20:02.823 --- 10.0.0.1 ping statistics --- 00:20:02.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.823 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:02.823 23:33:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.823 23:33:23 -- nvmf/common.sh@410 -- # return 0 00:20:02.823 23:33:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:02.823 23:33:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.823 23:33:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:02.823 23:33:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:02.823 23:33:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.823 23:33:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:02.823 23:33:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:02.823 23:33:23 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:02.823 23:33:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:02.823 23:33:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:02.823 23:33:23 -- common/autotest_common.sh@10 -- # set +x 00:20:02.823 23:33:23 -- nvmf/common.sh@469 -- # nvmfpid=258176 00:20:02.823 23:33:23 -- nvmf/common.sh@470 -- # waitforlisten 258176 00:20:02.823 23:33:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:02.823 23:33:23 -- common/autotest_common.sh@819 -- # '[' -z 258176 ']' 00:20:02.823 23:33:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.823 23:33:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.823 23:33:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.823 23:33:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.823 23:33:23 -- common/autotest_common.sh@10 -- # set +x 00:20:02.823 [2024-07-11 23:33:23.494812] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:02.823 [2024-07-11 23:33:23.494920] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.823 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.823 [2024-07-11 23:33:23.578956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.823 [2024-07-11 23:33:23.684030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:02.823 [2024-07-11 23:33:23.684238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.823 [2024-07-11 23:33:23.684263] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.823 [2024-07-11 23:33:23.684278] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
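Condensed from the startup above and the queue_depth.sh trace around it: the target runs inside the network namespace so its listener binds the moved port, and the test then provisions it over the RPC socket. A sketch of the same sequence issued directly with scripts/rpc.py (rpc_cmd in this log is a thin wrapper over these calls; paths are shortened and the flags are copied from the trace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # waitforlisten polls the default RPC socket, /var/tmp/spdk.sock, until the target answers
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, flags as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420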
00:20:02.823 [2024-07-11 23:33:23.684309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.758 23:33:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:03.758 23:33:24 -- common/autotest_common.sh@852 -- # return 0 00:20:03.758 23:33:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:03.758 23:33:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:03.758 23:33:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.758 23:33:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.758 23:33:24 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.758 23:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.758 23:33:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.758 [2024-07-11 23:33:24.591095] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.758 23:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.758 23:33:24 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.758 23:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.758 23:33:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.758 Malloc0 00:20:03.758 23:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.758 23:33:24 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.758 23:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.758 23:33:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.758 23:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.758 23:33:24 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.758 23:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.758 23:33:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.758 23:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.758 23:33:24 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.758 23:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.758 23:33:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.758 [2024-07-11 23:33:24.658392] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.758 23:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.758 23:33:24 -- target/queue_depth.sh@30 -- # bdevperf_pid=258406 00:20:03.758 23:33:24 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:03.758 23:33:24 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.758 23:33:24 -- target/queue_depth.sh@33 -- # waitforlisten 258406 /var/tmp/bdevperf.sock 00:20:03.758 23:33:24 -- common/autotest_common.sh@819 -- # '[' -z 258406 ']' 00:20:03.758 23:33:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.758 23:33:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:03.758 23:33:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:03.758 23:33:24 -- common/autotest_common.sh@828 -- # xtrace_disable
00:20:03.758 23:33:24 -- common/autotest_common.sh@10 -- # set +x
00:20:04.016 [2024-07-11 23:33:24.715119] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
[2024-07-11 23:33:24.715235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258406 ]
00:20:04.016 EAL: No free 2048 kB hugepages reported on node 1
00:20:04.016 [2024-07-11 23:33:24.790962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:04.016 [2024-07-11 23:33:24.882169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:05.385 23:33:25 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:20:05.385 23:33:25 -- common/autotest_common.sh@852 -- # return 0
00:20:05.385 23:33:25 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:05.385 23:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:05.385 23:33:25 -- common/autotest_common.sh@10 -- # set +x
00:20:05.385 NVMe0n1
00:20:05.385 23:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:05.385 23:33:26 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:05.643 Running I/O for 10 seconds...
00:20:15.652 
00:20:15.652                                                                           Latency(us)
00:20:15.652  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:15.652  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:20:15.652  Verification LBA range: start 0x0 length 0x4000
00:20:15.652  NVMe0n1                     :      10.07   12141.84      47.43       0.00       0.00   83994.38   15243.19   63302.92
00:20:15.652  ===================================================================================================================
00:20:15.652  Total                       :             12141.84      47.43       0.00       0.00   83994.38   15243.19   63302.92
00:20:15.652 0
00:20:15.653 23:33:36 -- target/queue_depth.sh@39 -- # killprocess 258406
23:33:36 -- common/autotest_common.sh@926 -- # '[' -z 258406 ']'
23:33:36 -- common/autotest_common.sh@930 -- # kill -0 258406
23:33:36 -- common/autotest_common.sh@931 -- # uname
23:33:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
23:33:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 258406
23:33:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0
23:33:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
23:33:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 258406'
killing process with pid 258406
23:33:36 -- common/autotest_common.sh@945 -- # kill 258406
Received shutdown signal, test time was about 10.000000 seconds
 
                                                                          Latency(us)
 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
 ===================================================================================================================
 Total                       :                 0.00       0.00       0.00       0.00       0.00       0.00       0.00
23:33:36 --
common/autotest_common.sh@950 -- # wait 258406 00:20:15.911 23:33:36 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:15.911 23:33:36 -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:15.911 23:33:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:15.911 23:33:36 -- nvmf/common.sh@116 -- # sync 00:20:15.911 23:33:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:15.911 23:33:36 -- nvmf/common.sh@119 -- # set +e 00:20:15.911 23:33:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:15.911 23:33:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:15.911 rmmod nvme_tcp 00:20:15.911 rmmod nvme_fabrics 00:20:15.911 rmmod nvme_keyring 00:20:15.911 23:33:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:15.911 23:33:36 -- nvmf/common.sh@123 -- # set -e 00:20:15.911 23:33:36 -- nvmf/common.sh@124 -- # return 0 00:20:15.911 23:33:36 -- nvmf/common.sh@477 -- # '[' -n 258176 ']' 00:20:15.911 23:33:36 -- nvmf/common.sh@478 -- # killprocess 258176 00:20:15.911 23:33:36 -- common/autotest_common.sh@926 -- # '[' -z 258176 ']' 00:20:15.911 23:33:36 -- common/autotest_common.sh@930 -- # kill -0 258176 00:20:15.911 23:33:36 -- common/autotest_common.sh@931 -- # uname 00:20:15.911 23:33:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:15.911 23:33:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 258176 00:20:15.911 23:33:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:15.911 23:33:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:15.911 23:33:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 258176' 00:20:15.911 killing process with pid 258176 00:20:15.911 23:33:36 -- common/autotest_common.sh@945 -- # kill 258176 00:20:15.911 23:33:36 -- common/autotest_common.sh@950 -- # wait 258176 00:20:16.482 23:33:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:16.482 23:33:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:16.482 23:33:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:16.482 23:33:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.482 23:33:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:16.482 23:33:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.482 23:33:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.482 23:33:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.389 23:33:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:18.389 00:20:18.389 real 0m18.538s 00:20:18.389 user 0m26.169s 00:20:18.389 sys 0m4.080s 00:20:18.389 23:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.389 23:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:18.389 ************************************ 00:20:18.389 END TEST nvmf_queue_depth 00:20:18.389 ************************************ 00:20:18.389 23:33:39 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:18.389 23:33:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:18.389 23:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:18.389 23:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:18.389 ************************************ 00:20:18.389 START TEST nvmf_multipath 00:20:18.389 ************************************ 00:20:18.389 23:33:39 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:18.389 * Looking for test storage... 00:20:18.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:18.389 23:33:39 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.389 23:33:39 -- nvmf/common.sh@7 -- # uname -s 00:20:18.389 23:33:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.389 23:33:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.389 23:33:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.389 23:33:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.389 23:33:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.389 23:33:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.389 23:33:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.389 23:33:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.389 23:33:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.389 23:33:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.389 23:33:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:18.389 23:33:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:18.389 23:33:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.389 23:33:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.389 23:33:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.389 23:33:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.389 23:33:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.389 23:33:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.389 23:33:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.389 23:33:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.389 23:33:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.389 23:33:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.389 23:33:39 -- paths/export.sh@5 -- # export PATH 00:20:18.389 23:33:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.389 23:33:39 -- nvmf/common.sh@46 -- # : 0 00:20:18.389 23:33:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:18.389 23:33:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:18.389 23:33:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:18.389 23:33:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.389 23:33:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.389 23:33:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:18.389 23:33:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:18.389 23:33:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:18.389 23:33:39 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:18.389 23:33:39 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:18.389 23:33:39 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:18.389 23:33:39 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.389 23:33:39 -- target/multipath.sh@43 -- # nvmftestinit 00:20:18.389 23:33:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:18.389 23:33:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.389 23:33:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:18.389 23:33:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:18.389 23:33:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:18.389 23:33:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.389 23:33:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.389 23:33:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.389 23:33:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:18.389 23:33:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:18.389 23:33:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:18.389 23:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:21.682 23:33:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:21.682 23:33:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:21.682 23:33:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:21.682 23:33:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:21.682 23:33:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:21.682 23:33:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:21.682 23:33:42 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:20:21.682 23:33:42 -- nvmf/common.sh@294 -- # net_devs=() 00:20:21.682 23:33:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:21.682 23:33:42 -- nvmf/common.sh@295 -- # e810=() 00:20:21.682 23:33:42 -- nvmf/common.sh@295 -- # local -ga e810 00:20:21.682 23:33:42 -- nvmf/common.sh@296 -- # x722=() 00:20:21.682 23:33:42 -- nvmf/common.sh@296 -- # local -ga x722 00:20:21.682 23:33:42 -- nvmf/common.sh@297 -- # mlx=() 00:20:21.682 23:33:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:21.682 23:33:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.682 23:33:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:21.682 23:33:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:21.682 23:33:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:21.682 23:33:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:21.682 23:33:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:21.682 23:33:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:21.682 23:33:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:21.682 23:33:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:21.682 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:21.682 23:33:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:21.682 23:33:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:21.682 23:33:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:21.683 23:33:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:21.683 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:21.683 23:33:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:21.683 23:33:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:21.683 23:33:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.683 23:33:42 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:20:21.683 23:33:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.683 23:33:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:21.683 Found net devices under 0000:84:00.0: cvl_0_0 00:20:21.683 23:33:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.683 23:33:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:21.683 23:33:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.683 23:33:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:21.683 23:33:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.683 23:33:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:21.683 Found net devices under 0000:84:00.1: cvl_0_1 00:20:21.683 23:33:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.683 23:33:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:21.683 23:33:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:21.683 23:33:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:21.683 23:33:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.683 23:33:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.683 23:33:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.683 23:33:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:21.683 23:33:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.683 23:33:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.683 23:33:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:21.683 23:33:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.683 23:33:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.683 23:33:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:21.683 23:33:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:21.683 23:33:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.683 23:33:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.683 23:33:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.683 23:33:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.683 23:33:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:21.683 23:33:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.683 23:33:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.683 23:33:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.683 23:33:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:21.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:20:21.683 00:20:21.683 --- 10.0.0.2 ping statistics --- 00:20:21.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.683 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:20:21.683 23:33:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:20:21.683 00:20:21.683 --- 10.0.0.1 ping statistics --- 00:20:21.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.683 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:21.683 23:33:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.683 23:33:42 -- nvmf/common.sh@410 -- # return 0 00:20:21.683 23:33:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:21.683 23:33:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.683 23:33:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.683 23:33:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:21.683 23:33:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:21.683 23:33:42 -- target/multipath.sh@45 -- # '[' -z ']' 00:20:21.683 23:33:42 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:20:21.683 only one NIC for nvmf test 00:20:21.683 23:33:42 -- target/multipath.sh@47 -- # nvmftestfini 00:20:21.683 23:33:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:21.683 23:33:42 -- nvmf/common.sh@116 -- # sync 00:20:21.683 23:33:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:21.683 23:33:42 -- nvmf/common.sh@119 -- # set +e 00:20:21.683 23:33:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:21.683 23:33:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:21.683 rmmod nvme_tcp 00:20:21.683 rmmod nvme_fabrics 00:20:21.683 rmmod nvme_keyring 00:20:21.683 23:33:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:21.683 23:33:42 -- nvmf/common.sh@123 -- # set -e 00:20:21.683 23:33:42 -- nvmf/common.sh@124 -- # return 0 00:20:21.683 23:33:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:21.683 23:33:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:21.683 23:33:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:21.683 23:33:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.683 23:33:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:21.683 23:33:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.683 23:33:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.683 23:33:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.588 23:33:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:23.588 23:33:44 -- target/multipath.sh@48 -- # exit 0 00:20:23.588 23:33:44 -- target/multipath.sh@1 -- # nvmftestfini 00:20:23.588 23:33:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:23.588 23:33:44 -- nvmf/common.sh@116 -- # sync 00:20:23.588 23:33:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:23.588 23:33:44 -- nvmf/common.sh@119 -- # set +e 00:20:23.588 23:33:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:23.588 23:33:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:23.588 23:33:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:23.588 23:33:44 -- nvmf/common.sh@123 -- # set -e 00:20:23.588 23:33:44 -- nvmf/common.sh@124 -- # return 0 00:20:23.588 23:33:44 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:23.588 23:33:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:23.588 23:33:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:23.588 23:33:44 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:20:23.588 23:33:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.588 23:33:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:23.588 23:33:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.588 23:33:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.588 23:33:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.588 23:33:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:23.588 00:20:23.588 real 0m5.169s 00:20:23.588 user 0m0.951s 00:20:23.588 sys 0m2.214s 00:20:23.588 23:33:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.588 23:33:44 -- common/autotest_common.sh@10 -- # set +x 00:20:23.588 ************************************ 00:20:23.588 END TEST nvmf_multipath 00:20:23.588 ************************************ 00:20:23.588 23:33:44 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:23.588 23:33:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:23.588 23:33:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:23.588 23:33:44 -- common/autotest_common.sh@10 -- # set +x 00:20:23.588 ************************************ 00:20:23.588 START TEST nvmf_zcopy 00:20:23.588 ************************************ 00:20:23.588 23:33:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:23.588 * Looking for test storage... 00:20:23.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.588 23:33:44 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.588 23:33:44 -- nvmf/common.sh@7 -- # uname -s 00:20:23.588 23:33:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.588 23:33:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.588 23:33:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.588 23:33:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.588 23:33:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.588 23:33:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.588 23:33:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.588 23:33:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.588 23:33:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.588 23:33:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.588 23:33:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:23.588 23:33:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:23.588 23:33:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.588 23:33:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.588 23:33:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.588 23:33:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.588 23:33:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.588 23:33:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.588 23:33:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.588 23:33:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.588 23:33:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.588 23:33:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.588 23:33:44 -- paths/export.sh@5 -- # export PATH 00:20:23.588 23:33:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.588 23:33:44 -- nvmf/common.sh@46 -- # : 0 00:20:23.588 23:33:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:23.588 23:33:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:23.588 23:33:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:23.588 23:33:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.588 23:33:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.588 23:33:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:23.588 23:33:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:23.588 23:33:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:23.588 23:33:44 -- target/zcopy.sh@12 -- # nvmftestinit 00:20:23.588 23:33:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:23.588 23:33:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.588 23:33:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:23.588 23:33:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:23.588 23:33:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:23.588 23:33:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.588 23:33:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.588 23:33:44 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.588 23:33:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:23.588 23:33:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:23.588 23:33:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:23.588 23:33:44 -- common/autotest_common.sh@10 -- # set +x 00:20:26.121 23:33:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:26.121 23:33:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:26.121 23:33:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:26.121 23:33:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:26.121 23:33:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:26.121 23:33:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:26.121 23:33:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:26.121 23:33:46 -- nvmf/common.sh@294 -- # net_devs=() 00:20:26.121 23:33:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:26.122 23:33:46 -- nvmf/common.sh@295 -- # e810=() 00:20:26.122 23:33:46 -- nvmf/common.sh@295 -- # local -ga e810 00:20:26.122 23:33:46 -- nvmf/common.sh@296 -- # x722=() 00:20:26.122 23:33:46 -- nvmf/common.sh@296 -- # local -ga x722 00:20:26.122 23:33:46 -- nvmf/common.sh@297 -- # mlx=() 00:20:26.122 23:33:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:26.122 23:33:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.122 23:33:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:26.122 23:33:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:26.122 23:33:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:26.122 23:33:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:26.122 23:33:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:26.122 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:26.122 23:33:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:26.122 23:33:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:26.122 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:26.122 
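This NIC classification repeats before every target test: gather_supported_nvmf_pci_devs buckets candidate ports by PCI vendor:device ID and, as the [[ e810 == e810 ]] branch shows, prefers the E810 list on TCP runs. A self-contained sketch of the same bucketing; it lists only the IDs visible in this trace (the real tables are longer) and assumes a pciutils whose -d option accepts a ::0200 class filter:

  declare -A nic_family=(
    [8086:1592]=e810  [8086:159b]=e810    # Intel E810; 0x159b matches both 0000:84:00.x ports here
    [8086:37d2]=x722                      # Intel X722
    [15b3:101d]=mlx   [15b3:1017]=mlx     # two of the Mellanox ConnectX IDs from the table
  )
  while read -r addr id; do
    printf 'Found %s (%s -> %s)\n' "$addr" "$id" "${nic_family[$id]:-unknown}"
  done < <(lspci -Dn -d ::0200 | awk '{print $1, $3}')   # address + vendor:device per Ethernet function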
23:33:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:26.122 23:33:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:26.122 23:33:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.122 23:33:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:26.122 23:33:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.122 23:33:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:26.122 Found net devices under 0000:84:00.0: cvl_0_0 00:20:26.122 23:33:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.122 23:33:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:26.122 23:33:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.122 23:33:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:26.122 23:33:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.122 23:33:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:26.122 Found net devices under 0000:84:00.1: cvl_0_1 00:20:26.122 23:33:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.122 23:33:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:26.122 23:33:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:26.122 23:33:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:26.122 23:33:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:26.122 23:33:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.122 23:33:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.122 23:33:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.122 23:33:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:26.122 23:33:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.122 23:33:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.122 23:33:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:26.122 23:33:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.122 23:33:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.122 23:33:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:26.122 23:33:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:26.122 23:33:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.122 23:33:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.122 23:33:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.122 23:33:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.122 23:33:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:26.122 23:33:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.381 23:33:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.381 23:33:47 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.381 23:33:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:26.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:20:26.381 00:20:26.381 --- 10.0.0.2 ping statistics --- 00:20:26.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.381 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:20:26.381 23:33:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:20:26.381 00:20:26.381 --- 10.0.0.1 ping statistics --- 00:20:26.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.381 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:26.381 23:33:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.381 23:33:47 -- nvmf/common.sh@410 -- # return 0 00:20:26.381 23:33:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:26.381 23:33:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.381 23:33:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:26.381 23:33:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:26.381 23:33:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.381 23:33:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:26.381 23:33:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:26.381 23:33:47 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:26.381 23:33:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:26.381 23:33:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:26.381 23:33:47 -- common/autotest_common.sh@10 -- # set +x 00:20:26.381 23:33:47 -- nvmf/common.sh@469 -- # nvmfpid=263848 00:20:26.381 23:33:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.381 23:33:47 -- nvmf/common.sh@470 -- # waitforlisten 263848 00:20:26.381 23:33:47 -- common/autotest_common.sh@819 -- # '[' -z 263848 ']' 00:20:26.381 23:33:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.381 23:33:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:26.381 23:33:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.381 23:33:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:26.381 23:33:47 -- common/autotest_common.sh@10 -- # set +x 00:20:26.381 [2024-07-11 23:33:47.246620] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:26.381 [2024-07-11 23:33:47.246798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.381 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.639 [2024-07-11 23:33:47.366479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.639 [2024-07-11 23:33:47.472288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:26.639 [2024-07-11 23:33:47.472483] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.639 [2024-07-11 23:33:47.472508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.639 [2024-07-11 23:33:47.472526] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.639 [2024-07-11 23:33:47.472563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.573 23:33:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.573 23:33:48 -- common/autotest_common.sh@852 -- # return 0 00:20:27.573 23:33:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:27.573 23:33:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:27.573 23:33:48 -- common/autotest_common.sh@10 -- # set +x 00:20:27.573 23:33:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.573 23:33:48 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:20:27.573 23:33:48 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:20:27.573 23:33:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.573 23:33:48 -- common/autotest_common.sh@10 -- # set +x 00:20:27.573 [2024-07-11 23:33:48.413385] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.573 23:33:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.573 23:33:48 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:27.573 23:33:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.573 23:33:48 -- common/autotest_common.sh@10 -- # set +x 00:20:27.573 23:33:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.573 23:33:48 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.573 23:33:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.573 23:33:48 -- common/autotest_common.sh@10 -- # set +x 00:20:27.573 [2024-07-11 23:33:48.429629] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.573 23:33:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.573 23:33:48 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.573 23:33:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.573 23:33:48 -- common/autotest_common.sh@10 -- # set +x 00:20:27.573 23:33:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.573 23:33:48 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:20:27.573 23:33:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.573 23:33:48 -- common/autotest_common.sh@10 -- # set +x 00:20:27.573 malloc0 00:20:27.573 23:33:48 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:20:27.573 23:33:48 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:27.573 23:33:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.573 23:33:48 -- common/autotest_common.sh@10 -- # set +x 00:20:27.573 23:33:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.573 23:33:48 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:20:27.573 23:33:48 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:20:27.573 23:33:48 -- nvmf/common.sh@520 -- # config=() 00:20:27.573 23:33:48 -- nvmf/common.sh@520 -- # local subsystem config 00:20:27.573 23:33:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:27.573 23:33:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:27.573 { 00:20:27.573 "params": { 00:20:27.573 "name": "Nvme$subsystem", 00:20:27.573 "trtype": "$TEST_TRANSPORT", 00:20:27.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.573 "adrfam": "ipv4", 00:20:27.573 "trsvcid": "$NVMF_PORT", 00:20:27.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.573 "hdgst": ${hdgst:-false}, 00:20:27.573 "ddgst": ${ddgst:-false} 00:20:27.573 }, 00:20:27.573 "method": "bdev_nvme_attach_controller" 00:20:27.573 } 00:20:27.573 EOF 00:20:27.573 )") 00:20:27.573 23:33:48 -- nvmf/common.sh@542 -- # cat 00:20:27.573 23:33:48 -- nvmf/common.sh@544 -- # jq . 00:20:27.573 23:33:48 -- nvmf/common.sh@545 -- # IFS=, 00:20:27.573 23:33:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:27.573 "params": { 00:20:27.573 "name": "Nvme1", 00:20:27.573 "trtype": "tcp", 00:20:27.573 "traddr": "10.0.0.2", 00:20:27.573 "adrfam": "ipv4", 00:20:27.573 "trsvcid": "4420", 00:20:27.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.573 "hdgst": false, 00:20:27.573 "ddgst": false 00:20:27.573 }, 00:20:27.573 "method": "bdev_nvme_attach_controller" 00:20:27.573 }' 00:20:27.573 [2024-07-11 23:33:48.517281] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:27.573 [2024-07-11 23:33:48.517381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264009 ] 00:20:27.831 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.831 [2024-07-11 23:33:48.589081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.831 [2024-07-11 23:33:48.681100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.090 Running I/O for 10 seconds... 
00:20:38.064
00:20:38.064 Latency(us)
00:20:38.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:38.064 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:20:38.064 Verification LBA range: start 0x0 length 0x1000
00:20:38.064 Nvme1n1 : 10.01 8443.30 65.96 0.00 0.00 15123.48 1820.44 24175.50
00:20:38.064 ===================================================================================================================
00:20:38.064 Total : 8443.30 65.96 0.00 0.00 15123.48 1820.44 24175.50
00:20:38.323 23:33:59 -- target/zcopy.sh@39 -- # perfpid=265236
00:20:38.323 23:33:59 -- target/zcopy.sh@41 -- # xtrace_disable
00:20:38.323 23:33:59 -- common/autotest_common.sh@10 -- # set +x
00:20:38.323 23:33:59 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:20:38.323 23:33:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:20:38.323 23:33:59 -- nvmf/common.sh@520 -- # config=()
00:20:38.323 23:33:59 -- nvmf/common.sh@520 -- # local subsystem config
00:20:38.323 23:33:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:20:38.323 23:33:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:20:38.323 {
00:20:38.323 "params": {
00:20:38.323 "name": "Nvme$subsystem",
00:20:38.323 "trtype": "$TEST_TRANSPORT",
00:20:38.323 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:38.323 "adrfam": "ipv4",
00:20:38.323 "trsvcid": "$NVMF_PORT",
00:20:38.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:38.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:38.323 "hdgst": ${hdgst:-false},
00:20:38.323 "ddgst": ${ddgst:-false}
00:20:38.323 },
00:20:38.323 "method": "bdev_nvme_attach_controller"
00:20:38.323 }
00:20:38.323 EOF
00:20:38.323 )")
00:20:38.323 [2024-07-11 23:33:59.181688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:38.323 [2024-07-11 23:33:59.181742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:38.323 23:33:59 -- nvmf/common.sh@542 -- # cat
00:20:38.323 23:33:59 -- nvmf/common.sh@544 -- # jq .
00:20:38.323 23:33:59 -- nvmf/common.sh@545 -- # IFS=, 00:20:38.323 23:33:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:38.323 "params": { 00:20:38.323 "name": "Nvme1", 00:20:38.323 "trtype": "tcp", 00:20:38.323 "traddr": "10.0.0.2", 00:20:38.323 "adrfam": "ipv4", 00:20:38.323 "trsvcid": "4420", 00:20:38.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.323 "hdgst": false, 00:20:38.323 "ddgst": false 00:20:38.323 }, 00:20:38.323 "method": "bdev_nvme_attach_controller" 00:20:38.323 }' 00:20:38.323 [2024-07-11 23:33:59.189644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.189678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.197664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.197696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.205688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.205720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.213710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.213741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.221735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.221766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.226358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:38.323 [2024-07-11 23:33:59.226449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265236 ] 00:20:38.323 [2024-07-11 23:33:59.229757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.229788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.237780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.237811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.245805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.245835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.253830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.253860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 [2024-07-11 23:33:59.261851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.261881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.323 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.323 [2024-07-11 23:33:59.269872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.323 [2024-07-11 23:33:59.269902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.582 [2024-07-11 23:33:59.277878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.582 [2024-07-11 23:33:59.277904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.582 [2024-07-11 23:33:59.285897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.582 [2024-07-11 23:33:59.285922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.582 [2024-07-11 23:33:59.293918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.582 [2024-07-11 23:33:59.293942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.582 [2024-07-11 23:33:59.301940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.582 [2024-07-11 23:33:59.301964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.582 [2024-07-11 23:33:59.302209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.582 [2024-07-11 23:33:59.309995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.582 [2024-07-11 23:33:59.310031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.582 [2024-07-11 23:33:59.318013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.582 [2024-07-11 23:33:59.318049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:38.582 [2024-07-11 23:33:59.326006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:38.582 [2024-07-11 23:33:59.326031] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair "subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" continues to repeat every few milliseconds, from [2024-07-11 23:33:59.334026] onward ...]
[2024-07-11 23:33:59.397293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... the same error pair continues to repeat ...]
Running I/O for 5 seconds...
[... the same error pair repeats every few milliseconds throughout the 5-second run, until the section breaks off mid-entry at [2024-07-11 23:34:01.296743] ...]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.399 [2024-07-11 23:34:01.296775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.399 [2024-07-11 23:34:01.307987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.399 [2024-07-11 23:34:01.308018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.399 [2024-07-11 23:34:01.318874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.399 [2024-07-11 23:34:01.318905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.399 [2024-07-11 23:34:01.330080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.399 [2024-07-11 23:34:01.330111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.399 [2024-07-11 23:34:01.340507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.399 [2024-07-11 23:34:01.340538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.351744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.351782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.362663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.362694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.373840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.373871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.384060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.384091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.395577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.395609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.406738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.406769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.419785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.419816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.429872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.429915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.441665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.441696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.452966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.452997] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.464339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.464371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.476229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.476260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.691 [2024-07-11 23:34:01.489912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.691 [2024-07-11 23:34:01.489942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.499954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.499985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.511832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.511862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.523149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.523179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.535431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.535461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.545458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.545489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.557914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.557945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.568749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.568787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.580253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.580284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.591699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.591729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.602876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.602906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.692 [2024-07-11 23:34:01.614223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.692 [2024-07-11 23:34:01.614255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.625030] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.625061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.636375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.636407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.647647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.647678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.658954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.658985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.669803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.669834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.680944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.680976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.692248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.692278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.703642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.703675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.714630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.714661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.727055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.727085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.736609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.736641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.748648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.748680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.759736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.759768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.770807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.770839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.781935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.781974] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.793358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.793389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.804915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.804946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.816654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.816685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.828119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.828160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.839358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.839389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.850631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.850666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.862006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.862038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.872920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.872951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.884197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.884229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:40.950 [2024-07-11 23:34:01.894950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:40.950 [2024-07-11 23:34:01.894981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.905345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.905376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.916985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.917016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.928620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.928651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.939617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.939647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.950785] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.950816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.963852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.963882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.973920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.973952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.985732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.985762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:01.996759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:01.996790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.009546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.009576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.019422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.019453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.031239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.031270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.042314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.042345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.053046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.053076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.063752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.063783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.074809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.074840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.086086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.086117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.097386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.097417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.108514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.108545] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.119717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.119748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.130707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.130737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.141530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.141561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.208 [2024-07-11 23:34:02.152204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.208 [2024-07-11 23:34:02.152235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.163148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.163178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.174700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.174730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.185848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.185879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.196841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.196872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.207571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.207602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.218671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.218702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.229806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.229836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.240911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.240942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.251979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.252010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.263068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.263098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.274288] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.274319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.285743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.285774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.296676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.296706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.307411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.307442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.318532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.318563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.329078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.329109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.339858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.339890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.350694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.350725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.362221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.362252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.373528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.373558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.384882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.384913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.467 [2024-07-11 23:34:02.396064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.467 [2024-07-11 23:34:02.396096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.468 [2024-07-11 23:34:02.407547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.468 [2024-07-11 23:34:02.407578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.418836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.418867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.430347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.430378] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.441908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.441938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.453497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.453528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.464687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.464718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.475537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.475568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.487223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.487254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.498370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.498401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.509844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.725 [2024-07-11 23:34:02.509875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.725 [2024-07-11 23:34:02.521298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.521329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.532438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.532468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.543405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.543437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.554572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.554603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.565824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.565856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.577092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.577122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.588414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.588451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.599952] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.599983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.611480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.611510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.622921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.622953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.634151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.634191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.645366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.645396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.656496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.656527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.726 [2024-07-11 23:34:02.667752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.726 [2024-07-11 23:34:02.667783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.678946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.678977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.690259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.690290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.701400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.701431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.712530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.712562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.723915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.723946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.734562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.734592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.745989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.746020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.757397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.757428] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.767772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.767803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.779801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.779832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.791045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.791075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.802554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.802585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.813665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.813696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.824885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.824915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.836165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.836205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.848968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.848999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.858885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.858916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.870643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.870674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.881801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.881833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.893190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.893221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.904584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.904616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.915896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.915928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:41.984 [2024-07-11 23:34:02.927198] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:41.984 [2024-07-11 23:34:02.927229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:02.938362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:02.938402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:02.949547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:02.949579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:02.961097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:02.961128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:02.972329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:02.972360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:02.983443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:02.983477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:02.994488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:02.994518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.005804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.005835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.017393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.017426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.028953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.028983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.040399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.040429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.051643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.051680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.062949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.062979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.073971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.074001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.085127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.085168] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.096540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.096570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.107672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.107702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.119134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.119175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.130611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.130642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.141132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.141171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.154977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.155008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.165499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.165528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.177003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.242 [2024-07-11 23:34:03.177032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.242 [2024-07-11 23:34:03.188489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.243 [2024-07-11 23:34:03.188519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.199790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.199820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.211539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.211570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.223155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.223185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.234504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.234534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.245378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.245407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.256438] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.256468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.267762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.267800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.278777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.278807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.289871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.289901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.301215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.301245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.312447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.312477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.323680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.323709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.333996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.334026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.345435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.345466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.356879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.356909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.368529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.368560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.379561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.379591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.392172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.392202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.402347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.402377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.414527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.414557] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501 [2024-07-11 23:34:03.425829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:42.501 [2024-07-11 23:34:03.425859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:42.501
[... the same two-line pair (subsystem.c:1793 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1513 "Unable to add namespace") repeats roughly every 11 ms from 23:34:03.437 through 23:34:04.719 as the test loops on nvmf_subsystem_add_ns; the elapsed-time markers advance from 00:20:42.501 to 00:20:43.793 ...]
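The flood above is the point of this test step: NSID 1 on nqn.2016-06.io.spdk:cnode1 is already occupied, and the script keeps retrying nvmf_subsystem_add_ns with the same NSID, so every RPC fails with the same pair of messages. A minimal sketch of the collision using scripts/rpc.py (the second bdev name, Malloc1, is illustrative, not taken from this run):

  # NSID 1 is claimed by the first add, so a second add with the same NSID is rejected
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # -> Requested NSID 1 already in use
  # freeing the NSID first is what allows the delay0 swap later in this log
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1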
[2024-07-11 23:34:04.727064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:43.793 [2024-07-11 23:34:04.727093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:43.793
00:20:43.793 Latency(us)
00:20:43.793 Device Information                                                           : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average        min        max
00:20:43.793 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:20:43.793 Nvme1n1                                                                      :       5.01   11362.78      88.77      0.00    0.00   11250.17    5024.43   20680.25
00:20:43.793 ===================================================================================================================
00:20:43.793 Total                                                                        :              11362.78      88.77      0.00    0.00   11250.17    5024.43   20680.25
[... the NSID error pair resumes at 23:34:04.735 and repeats roughly every 8-11 ms; its last few attempts and the test teardown follow below ...]
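Once the retry loop drains (its last attempts appear just below), the script tears the namespace down and rebuilds it on top of a delay bdev, so the abort example has I/O slow enough to cancel mid-flight. Collected as standalone commands, assuming the same running target as in this log:

  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 with 1,000,000 us average and p99 latency on both reads and writes
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue 64-deep random I/O for 5 seconds and abort it while it is pending
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'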
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:44.051 [2024-07-11 23:34:04.943691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:44.051 [2024-07-11 23:34:04.943733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:44.051 [2024-07-11 23:34:04.951674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:44.051 [2024-07-11 23:34:04.951699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:44.051 [2024-07-11 23:34:04.959693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:44.051 [2024-07-11 23:34:04.959716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:44.051 [2024-07-11 23:34:04.967716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:44.051 [2024-07-11 23:34:04.967739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:44.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (265236) - No such process 00:20:44.051 23:34:04 -- target/zcopy.sh@49 -- # wait 265236 00:20:44.051 23:34:04 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:44.051 23:34:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:44.051 23:34:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.051 23:34:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:44.051 23:34:04 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:44.051 23:34:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:44.051 23:34:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.051 delay0 00:20:44.051 23:34:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:44.051 23:34:04 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:44.051 23:34:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:44.051 23:34:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.051 23:34:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:44.051 23:34:05 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:44.308 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.308 [2024-07-11 23:34:05.137360] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:50.863 Initializing NVMe Controllers 00:20:50.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.863 Initialization complete. Launching workers. 
00:20:50.863 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 175 00:20:50.863 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 462, failed to submit 33 00:20:50.863 success 288, unsuccess 174, failed 0 00:20:50.863 23:34:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:50.863 23:34:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:50.863 23:34:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:50.863 23:34:11 -- nvmf/common.sh@116 -- # sync 00:20:50.863 23:34:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:50.863 23:34:11 -- nvmf/common.sh@119 -- # set +e 00:20:50.863 23:34:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:50.863 23:34:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:50.863 rmmod nvme_tcp 00:20:50.863 rmmod nvme_fabrics 00:20:50.863 rmmod nvme_keyring 00:20:50.863 23:34:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:50.863 23:34:11 -- nvmf/common.sh@123 -- # set -e 00:20:50.863 23:34:11 -- nvmf/common.sh@124 -- # return 0 00:20:50.863 23:34:11 -- nvmf/common.sh@477 -- # '[' -n 263848 ']' 00:20:50.863 23:34:11 -- nvmf/common.sh@478 -- # killprocess 263848 00:20:50.863 23:34:11 -- common/autotest_common.sh@926 -- # '[' -z 263848 ']' 00:20:50.863 23:34:11 -- common/autotest_common.sh@930 -- # kill -0 263848 00:20:50.863 23:34:11 -- common/autotest_common.sh@931 -- # uname 00:20:50.863 23:34:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:50.863 23:34:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 263848 00:20:50.863 23:34:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:50.863 23:34:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:50.863 23:34:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 263848' 00:20:50.863 killing process with pid 263848 00:20:50.863 23:34:11 -- common/autotest_common.sh@945 -- # kill 263848 00:20:50.863 23:34:11 -- common/autotest_common.sh@950 -- # wait 263848 00:20:50.863 23:34:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:50.863 23:34:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:50.863 23:34:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:50.863 23:34:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.863 23:34:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:50.863 23:34:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.863 23:34:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.863 23:34:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.769 23:34:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:52.769 00:20:52.769 real 0m29.251s 00:20:52.769 user 0m41.501s 00:20:52.769 sys 0m9.728s 00:20:52.769 23:34:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.769 23:34:13 -- common/autotest_common.sh@10 -- # set +x 00:20:52.769 ************************************ 00:20:52.769 END TEST nvmf_zcopy 00:20:52.769 ************************************ 00:20:52.769 23:34:13 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:52.769 23:34:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:52.769 23:34:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:52.769 23:34:13 -- common/autotest_common.sh@10 -- # set +x 00:20:52.769 ************************************ 
00:20:52.769 START TEST nvmf_nmic 00:20:52.769 ************************************ 00:20:52.769 23:34:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:53.029 * Looking for test storage... 00:20:53.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.029 23:34:13 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.029 23:34:13 -- nvmf/common.sh@7 -- # uname -s 00:20:53.029 23:34:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.029 23:34:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.029 23:34:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.029 23:34:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.029 23:34:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.029 23:34:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.029 23:34:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.029 23:34:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.029 23:34:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.029 23:34:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.029 23:34:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:53.029 23:34:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:53.029 23:34:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.029 23:34:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.029 23:34:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.029 23:34:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.029 23:34:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.029 23:34:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.029 23:34:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.029 23:34:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.029 23:34:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.029 23:34:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.029 23:34:13 -- paths/export.sh@5 -- # export PATH 00:20:53.029 23:34:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.029 23:34:13 -- nvmf/common.sh@46 -- # : 0 00:20:53.029 23:34:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:53.029 23:34:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:53.029 23:34:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:53.029 23:34:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.029 23:34:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.029 23:34:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:53.029 23:34:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:53.029 23:34:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:53.029 23:34:13 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.029 23:34:13 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.029 23:34:13 -- target/nmic.sh@14 -- # nvmftestinit 00:20:53.029 23:34:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:53.029 23:34:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.029 23:34:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:53.029 23:34:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:53.029 23:34:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:53.029 23:34:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.029 23:34:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.029 23:34:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.029 23:34:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:53.029 23:34:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:53.029 23:34:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:53.029 23:34:13 -- common/autotest_common.sh@10 -- # set +x 00:20:55.567 23:34:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:55.567 23:34:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:55.567 23:34:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:55.567 23:34:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:55.567 23:34:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:55.567 23:34:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:55.567 23:34:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:55.567 23:34:16 -- nvmf/common.sh@294 -- # net_devs=() 00:20:55.567 23:34:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:55.567 23:34:16 -- nvmf/common.sh@295 -- # 
e810=() 00:20:55.567 23:34:16 -- nvmf/common.sh@295 -- # local -ga e810 00:20:55.567 23:34:16 -- nvmf/common.sh@296 -- # x722=() 00:20:55.567 23:34:16 -- nvmf/common.sh@296 -- # local -ga x722 00:20:55.567 23:34:16 -- nvmf/common.sh@297 -- # mlx=() 00:20:55.567 23:34:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:55.567 23:34:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.567 23:34:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:55.567 23:34:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:55.567 23:34:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:55.567 23:34:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:55.567 23:34:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:55.567 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:55.567 23:34:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:55.567 23:34:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:55.567 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:55.567 23:34:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:55.567 23:34:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:55.567 23:34:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.567 23:34:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:55.567 23:34:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.567 23:34:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:55.567 Found net 
devices under 0000:84:00.0: cvl_0_0 00:20:55.567 23:34:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.567 23:34:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:55.567 23:34:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.567 23:34:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:55.567 23:34:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.567 23:34:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:55.567 Found net devices under 0000:84:00.1: cvl_0_1 00:20:55.567 23:34:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.567 23:34:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:55.567 23:34:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:55.567 23:34:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:55.567 23:34:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:55.567 23:34:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.567 23:34:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.567 23:34:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.567 23:34:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:55.567 23:34:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.567 23:34:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.567 23:34:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:55.567 23:34:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.567 23:34:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.567 23:34:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:55.567 23:34:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:55.567 23:34:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.567 23:34:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.567 23:34:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.567 23:34:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.567 23:34:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:55.567 23:34:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.826 23:34:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.826 23:34:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.826 23:34:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:55.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:20:55.826 00:20:55.826 --- 10.0.0.2 ping statistics --- 00:20:55.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.826 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:20:55.826 23:34:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:20:55.826 00:20:55.826 --- 10.0.0.1 ping statistics --- 00:20:55.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.826 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:55.826 23:34:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.826 23:34:16 -- nvmf/common.sh@410 -- # return 0 00:20:55.826 23:34:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:55.826 23:34:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.826 23:34:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:55.826 23:34:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:55.826 23:34:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.826 23:34:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:55.826 23:34:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:55.826 23:34:16 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:55.826 23:34:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:55.826 23:34:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:55.826 23:34:16 -- common/autotest_common.sh@10 -- # set +x 00:20:55.826 23:34:16 -- nvmf/common.sh@469 -- # nvmfpid=268696 00:20:55.826 23:34:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:55.826 23:34:16 -- nvmf/common.sh@470 -- # waitforlisten 268696 00:20:55.826 23:34:16 -- common/autotest_common.sh@819 -- # '[' -z 268696 ']' 00:20:55.826 23:34:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.826 23:34:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:55.826 23:34:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.826 23:34:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:55.826 23:34:16 -- common/autotest_common.sh@10 -- # set +x 00:20:55.826 [2024-07-11 23:34:16.698484] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:55.826 [2024-07-11 23:34:16.698652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.084 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.084 [2024-07-11 23:34:16.824543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.084 [2024-07-11 23:34:16.922632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:56.084 [2024-07-11 23:34:16.922807] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.084 [2024-07-11 23:34:16.922826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.084 [2024-07-11 23:34:16.922841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
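For orientation, the EAL and app output above comes from nvmf_tgt started inside the network namespace the harness assembled earlier in this log: the first E810 port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk and serves as the target, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. A condensed sketch of that bring-up, using the same commands the script ran (paths shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP listener port
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF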
00:20:56.084 [2024-07-11 23:34:16.922927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.084 [2024-07-11 23:34:16.923074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.084 [2024-07-11 23:34:16.923125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.084 [2024-07-11 23:34:16.923128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.341 23:34:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.341 23:34:17 -- common/autotest_common.sh@852 -- # return 0 00:20:56.341 23:34:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:56.341 23:34:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 23:34:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.341 23:34:17 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 [2024-07-11 23:34:17.183958] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.341 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.341 23:34:17 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 Malloc0 00:20:56.341 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.341 23:34:17 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.341 23:34:17 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.341 23:34:17 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 [2024-07-11 23:34:17.238783] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.341 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.341 23:34:17 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:56.341 test case1: single bdev can't be used in multiple subsystems 00:20:56.341 23:34:17 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.341 23:34:17 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.341 23:34:17 -- target/nmic.sh@28 -- # nmic_status=0 00:20:56.341 23:34:17 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:56.341 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.341 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.341 [2024-07-11 23:34:17.262599] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:56.341 [2024-07-11 23:34:17.262632] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:56.342 [2024-07-11 23:34:17.262648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:56.342 request: 00:20:56.342 { 00:20:56.342 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:56.342 "namespace": { 00:20:56.342 "bdev_name": "Malloc0" 00:20:56.342 }, 00:20:56.342 "method": "nvmf_subsystem_add_ns", 00:20:56.342 "req_id": 1 00:20:56.342 } 00:20:56.342 Got JSON-RPC error response 00:20:56.342 response: 00:20:56.342 { 00:20:56.342 "code": -32602, 00:20:56.342 "message": "Invalid parameters" 00:20:56.342 } 00:20:56.342 23:34:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:56.342 23:34:17 -- target/nmic.sh@29 -- # nmic_status=1 00:20:56.342 23:34:17 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:56.342 23:34:17 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:56.342 Adding namespace failed - expected result. 00:20:56.342 23:34:17 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:56.342 test case2: host connect to nvmf target in multiple paths 00:20:56.342 23:34:17 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:56.342 23:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.342 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:56.342 [2024-07-11 23:34:17.270731] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:56.342 23:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.342 23:34:17 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:57.276 23:34:17 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:57.841 23:34:18 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:57.841 23:34:18 -- common/autotest_common.sh@1177 -- # local i=0 00:20:57.841 23:34:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:57.841 23:34:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:57.841 23:34:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:59.738 23:34:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:59.738 23:34:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:59.738 23:34:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:59.738 23:34:20 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:20:59.738 23:34:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:59.738 23:34:20 -- common/autotest_common.sh@1187 -- # return 0 00:20:59.738 23:34:20 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:59.738 [global] 00:20:59.738 thread=1 00:20:59.738 invalidate=1 00:20:59.738 rw=write 00:20:59.738 time_based=1 00:20:59.738 runtime=1 00:20:59.738 ioengine=libaio 00:20:59.738 direct=1 00:20:59.738 bs=4096 00:20:59.738 iodepth=1 00:20:59.738 norandommap=0 00:20:59.738 numjobs=1 00:20:59.738 00:20:59.738 verify_dump=1 00:20:59.738 verify_backlog=512 00:20:59.738 verify_state_save=0 00:20:59.738 do_verify=1 00:20:59.738 verify=crc32c-intel 00:20:59.738 [job0] 00:20:59.738 filename=/dev/nvme0n1 00:20:59.738 Could not set queue depth (nvme0n1) 00:20:59.995 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:59.995 fio-3.35 00:20:59.995 Starting 1 thread 00:21:01.368 00:21:01.368 job0: (groupid=0, jobs=1): err= 0: pid=269346: Thu Jul 11 23:34:21 2024 00:21:01.368 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:21:01.368 slat (nsec): min=5796, max=66670, avg=24221.23, stdev=10297.55 00:21:01.368 clat (usec): min=316, max=41290, avg=1083.80, stdev=4355.40 00:21:01.368 lat (usec): min=326, max=41308, avg=1108.02, stdev=4355.28 00:21:01.368 clat percentiles (usec): 00:21:01.368 | 1.00th=[ 326], 5.00th=[ 359], 10.00th=[ 388], 20.00th=[ 433], 00:21:01.368 | 30.00th=[ 461], 40.00th=[ 490], 50.00th=[ 570], 60.00th=[ 734], 00:21:01.368 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 857], 95.00th=[ 906], 00:21:01.368 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:01.368 | 99.99th=[41157] 00:21:01.368 write: IOPS=960, BW=3840KiB/s (3932kB/s)(3844KiB/1001msec); 0 zone resets 00:21:01.368 slat (usec): min=8, max=40620, avg=95.02, stdev=1595.97 00:21:01.368 clat (usec): min=197, max=731, avg=343.60, stdev=144.84 00:21:01.368 lat (usec): min=208, max=40975, avg=438.62, stdev=1606.36 00:21:01.368 clat percentiles (usec): 00:21:01.368 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 221], 00:21:01.368 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 255], 60.00th=[ 306], 00:21:01.368 | 70.00th=[ 453], 80.00th=[ 515], 90.00th=[ 586], 95.00th=[ 594], 00:21:01.368 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 734], 99.95th=[ 734], 00:21:01.368 | 99.99th=[ 734] 00:21:01.368 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.368 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.368 lat (usec) : 250=31.23%, 500=34.69%, 750=20.71%, 1000=12.97% 00:21:01.368 lat (msec) : 50=0.41% 00:21:01.368 cpu : usr=1.80%, sys=3.50%, ctx=1476, majf=0, minf=2 00:21:01.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.368 issued rwts: total=512,961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.368 00:21:01.368 Run status group 0 (all jobs): 00:21:01.368 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:21:01.368 WRITE: bw=3840KiB/s (3932kB/s), 3840KiB/s-3840KiB/s (3932kB/s-3932kB/s), io=3844KiB 
(3936kB), run=1001-1001msec 00:21:01.368 00:21:01.368 Disk stats (read/write): 00:21:01.368 nvme0n1: ios=564/724, merge=0/0, ticks=1110/234, in_queue=1344, util=99.60% 00:21:01.368 23:34:21 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:01.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:01.368 23:34:22 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:01.368 23:34:22 -- common/autotest_common.sh@1198 -- # local i=0 00:21:01.368 23:34:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:01.368 23:34:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:01.368 23:34:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:01.368 23:34:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:01.368 23:34:22 -- common/autotest_common.sh@1210 -- # return 0 00:21:01.368 23:34:22 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:01.368 23:34:22 -- target/nmic.sh@53 -- # nvmftestfini 00:21:01.368 23:34:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:01.368 23:34:22 -- nvmf/common.sh@116 -- # sync 00:21:01.368 23:34:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:01.368 23:34:22 -- nvmf/common.sh@119 -- # set +e 00:21:01.368 23:34:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:01.368 23:34:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:01.368 rmmod nvme_tcp 00:21:01.368 rmmod nvme_fabrics 00:21:01.368 rmmod nvme_keyring 00:21:01.368 23:34:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:01.368 23:34:22 -- nvmf/common.sh@123 -- # set -e 00:21:01.368 23:34:22 -- nvmf/common.sh@124 -- # return 0 00:21:01.368 23:34:22 -- nvmf/common.sh@477 -- # '[' -n 268696 ']' 00:21:01.368 23:34:22 -- nvmf/common.sh@478 -- # killprocess 268696 00:21:01.368 23:34:22 -- common/autotest_common.sh@926 -- # '[' -z 268696 ']' 00:21:01.368 23:34:22 -- common/autotest_common.sh@930 -- # kill -0 268696 00:21:01.368 23:34:22 -- common/autotest_common.sh@931 -- # uname 00:21:01.368 23:34:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:01.368 23:34:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 268696 00:21:01.368 23:34:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:01.368 23:34:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:01.368 23:34:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 268696' 00:21:01.368 killing process with pid 268696 00:21:01.368 23:34:22 -- common/autotest_common.sh@945 -- # kill 268696 00:21:01.368 23:34:22 -- common/autotest_common.sh@950 -- # wait 268696 00:21:01.628 23:34:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:01.628 23:34:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:01.628 23:34:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:01.628 23:34:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.628 23:34:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:01.628 23:34:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.628 23:34:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.628 23:34:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.187 23:34:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:04.187 00:21:04.187 real 0m10.831s 00:21:04.187 user 0m23.443s 00:21:04.187 sys 0m3.066s 00:21:04.187 23:34:24 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:21:04.187 23:34:24 -- common/autotest_common.sh@10 -- # set +x 00:21:04.187 ************************************ 00:21:04.187 END TEST nvmf_nmic 00:21:04.187 ************************************ 00:21:04.187 23:34:24 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:04.187 23:34:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:04.187 23:34:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:04.187 23:34:24 -- common/autotest_common.sh@10 -- # set +x 00:21:04.187 ************************************ 00:21:04.187 START TEST nvmf_fio_target 00:21:04.187 ************************************ 00:21:04.187 23:34:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:04.187 * Looking for test storage... 00:21:04.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.187 23:34:24 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.187 23:34:24 -- nvmf/common.sh@7 -- # uname -s 00:21:04.187 23:34:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.187 23:34:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.187 23:34:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.187 23:34:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.187 23:34:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.187 23:34:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.187 23:34:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.187 23:34:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.187 23:34:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.187 23:34:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.187 23:34:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:04.187 23:34:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:04.187 23:34:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.187 23:34:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.187 23:34:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.187 23:34:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.187 23:34:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.187 23:34:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.187 23:34:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.187 23:34:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.187 23:34:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.187 23:34:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.187 23:34:24 -- paths/export.sh@5 -- # export PATH 00:21:04.187 23:34:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.187 23:34:24 -- nvmf/common.sh@46 -- # : 0 00:21:04.187 23:34:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:04.187 23:34:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:04.187 23:34:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:04.187 23:34:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.187 23:34:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.187 23:34:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:04.187 23:34:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:04.187 23:34:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:04.187 23:34:24 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:04.187 23:34:24 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:04.187 23:34:24 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.187 23:34:24 -- target/fio.sh@16 -- # nvmftestinit 00:21:04.187 23:34:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:04.187 23:34:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.187 23:34:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:04.187 23:34:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:04.187 23:34:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:04.187 23:34:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.187 23:34:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.187 23:34:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.187 23:34:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:04.187 23:34:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:04.187 23:34:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:04.187 23:34:24 -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.746 23:34:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:06.746 23:34:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:06.746 23:34:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:06.746 23:34:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:06.746 23:34:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:06.746 23:34:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:06.746 23:34:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:06.746 23:34:27 -- nvmf/common.sh@294 -- # net_devs=() 00:21:06.746 23:34:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:06.746 23:34:27 -- nvmf/common.sh@295 -- # e810=() 00:21:06.746 23:34:27 -- nvmf/common.sh@295 -- # local -ga e810 00:21:06.746 23:34:27 -- nvmf/common.sh@296 -- # x722=() 00:21:06.746 23:34:27 -- nvmf/common.sh@296 -- # local -ga x722 00:21:06.746 23:34:27 -- nvmf/common.sh@297 -- # mlx=() 00:21:06.746 23:34:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:06.746 23:34:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.746 23:34:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:06.746 23:34:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:06.746 23:34:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:06.746 23:34:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:06.746 23:34:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:06.746 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:06.746 23:34:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:06.746 23:34:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:06.746 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:06.746 23:34:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
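[Editor's note] The device-ID tables traced above are how the harness picks its test NICs: PCI functions are bucketed by vendor/device ID (both ports here match Intel 0x8086:0x159b, an E810 port bound to the ice driver) and their net interfaces are read out of sysfs, exactly as the pci_net_devs expansion below does. A minimal standalone sketch of that discovery, assuming the same device ID as this run:

    # hedged sketch: list net devices for Intel E810 (0x159b) functions via /sys,
    # mirroring nvmf/common.sh's pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x8086" ] || continue
        [ "$(cat "$pci/device" 2>/dev/null)" = "0x159b" ] || continue
        echo "Found net devices under ${pci##*/}: $(ls "$pci/net")"
    done

This matches the "Found net devices under 0000:84:00.0/1: cvl_0_0 / cvl_0_1" lines printed just below, which supply the target and initiator interfaces for the TCP tests.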
00:21:06.746 23:34:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:06.746 23:34:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:06.746 23:34:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.746 23:34:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:06.746 23:34:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.746 23:34:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:06.746 Found net devices under 0000:84:00.0: cvl_0_0 00:21:06.746 23:34:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.746 23:34:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:06.746 23:34:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.746 23:34:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:06.746 23:34:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.746 23:34:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:06.746 Found net devices under 0000:84:00.1: cvl_0_1 00:21:06.746 23:34:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.746 23:34:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:06.746 23:34:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:06.746 23:34:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:06.746 23:34:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.746 23:34:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.746 23:34:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.746 23:34:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:06.746 23:34:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.746 23:34:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.746 23:34:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:06.746 23:34:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.746 23:34:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.746 23:34:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:06.746 23:34:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:06.746 23:34:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.746 23:34:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.746 23:34:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.746 23:34:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.746 23:34:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:06.746 23:34:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.746 23:34:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.746 23:34:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.746 23:34:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:06.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:06.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:21:06.746 00:21:06.746 --- 10.0.0.2 ping statistics --- 00:21:06.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.746 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:06.746 23:34:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:21:06.746 00:21:06.746 --- 10.0.0.1 ping statistics --- 00:21:06.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.746 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:06.746 23:34:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.746 23:34:27 -- nvmf/common.sh@410 -- # return 0 00:21:06.746 23:34:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:06.746 23:34:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.746 23:34:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:06.746 23:34:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.746 23:34:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:06.746 23:34:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:06.746 23:34:27 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:06.746 23:34:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:06.746 23:34:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:06.746 23:34:27 -- common/autotest_common.sh@10 -- # set +x 00:21:06.746 23:34:27 -- nvmf/common.sh@469 -- # nvmfpid=271570 00:21:06.747 23:34:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:06.747 23:34:27 -- nvmf/common.sh@470 -- # waitforlisten 271570 00:21:06.747 23:34:27 -- common/autotest_common.sh@819 -- # '[' -z 271570 ']' 00:21:06.747 23:34:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.747 23:34:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.747 23:34:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.747 23:34:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.747 23:34:27 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 [2024-07-11 23:34:27.507745] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:06.747 [2024-07-11 23:34:27.507911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.747 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.747 [2024-07-11 23:34:27.618179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.005 [2024-07-11 23:34:27.713260] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:07.005 [2024-07-11 23:34:27.713413] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.005 [2024-07-11 23:34:27.713433] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
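[Editor's note] The nvmf_tcp_init trace above reduces to a small amount of network-namespace plumbing: after flushing any stale addresses, the target-side port is moved into a private netns so initiator and target traffic traverse the physical link even though both ends run on one machine. Condensed, with the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # reachability check before the target starts

The two ping exchanges above are that final sanity check in both directions, and nvmf_tgt itself is then launched under "ip netns exec cvl_0_0_ns_spdk", which is why the NVMF_APP invocation below carries the netns prefix.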
00:21:07.005 [2024-07-11 23:34:27.713448] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.005 [2024-07-11 23:34:27.713517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.005 [2024-07-11 23:34:27.713573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.005 [2024-07-11 23:34:27.713664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.005 [2024-07-11 23:34:27.713668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.939 23:34:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:07.939 23:34:28 -- common/autotest_common.sh@852 -- # return 0 00:21:07.939 23:34:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:07.939 23:34:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:07.939 23:34:28 -- common/autotest_common.sh@10 -- # set +x 00:21:07.939 23:34:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.939 23:34:28 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:08.196 [2024-07-11 23:34:29.071663] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.196 23:34:29 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:08.761 23:34:29 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:08.761 23:34:29 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:09.019 23:34:29 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:09.019 23:34:29 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:09.277 23:34:30 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:09.277 23:34:30 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:09.843 23:34:30 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:09.843 23:34:30 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:10.407 23:34:31 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:10.972 23:34:31 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:10.972 23:34:31 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:11.230 23:34:31 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:11.230 23:34:31 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:11.488 23:34:32 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:11.488 23:34:32 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:11.746 23:34:32 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:12.003 23:34:32 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:12.003 23:34:32 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.260 23:34:33 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:12.260 23:34:33 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:12.824 23:34:33 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.081 [2024-07-11 23:34:33.799563] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.081 23:34:33 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:13.645 23:34:34 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:13.903 23:34:34 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:14.467 23:34:35 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:14.467 23:34:35 -- common/autotest_common.sh@1177 -- # local i=0 00:21:14.467 23:34:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:14.467 23:34:35 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:21:14.467 23:34:35 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:21:14.467 23:34:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:16.361 23:34:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:16.361 23:34:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:16.361 23:34:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:21:16.361 23:34:37 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:21:16.361 23:34:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:16.361 23:34:37 -- common/autotest_common.sh@1187 -- # return 0 00:21:16.361 23:34:37 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:16.361 [global] 00:21:16.361 thread=1 00:21:16.361 invalidate=1 00:21:16.361 rw=write 00:21:16.361 time_based=1 00:21:16.361 runtime=1 00:21:16.361 ioengine=libaio 00:21:16.361 direct=1 00:21:16.361 bs=4096 00:21:16.361 iodepth=1 00:21:16.361 norandommap=0 00:21:16.361 numjobs=1 00:21:16.361 00:21:16.361 verify_dump=1 00:21:16.361 verify_backlog=512 00:21:16.361 verify_state_save=0 00:21:16.361 do_verify=1 00:21:16.361 verify=crc32c-intel 00:21:16.361 [job0] 00:21:16.361 filename=/dev/nvme0n1 00:21:16.361 [job1] 00:21:16.361 filename=/dev/nvme0n2 00:21:16.361 [job2] 00:21:16.361 filename=/dev/nvme0n3 00:21:16.361 [job3] 00:21:16.361 filename=/dev/nvme0n4 00:21:16.619 Could not set queue depth (nvme0n1) 00:21:16.619 Could not set queue depth (nvme0n2) 00:21:16.619 Could not set queue depth (nvme0n3) 00:21:16.619 Could not set queue depth (nvme0n4) 00:21:16.619 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:16.619 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:16.619 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:21:16.619 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:16.619 fio-3.35 00:21:16.619 Starting 4 threads 00:21:17.991 00:21:17.992 job0: (groupid=0, jobs=1): err= 0: pid=272817: Thu Jul 11 23:34:38 2024 00:21:17.992 read: IOPS=20, BW=82.1KiB/s (84.1kB/s)(84.0KiB/1023msec) 00:21:17.992 slat (nsec): min=17511, max=37269, avg=21099.43, stdev=6386.71 00:21:17.992 clat (usec): min=40923, max=41076, avg=40977.93, stdev=40.73 00:21:17.992 lat (usec): min=40941, max=41094, avg=40999.03, stdev=39.33 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:21:17.992 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:17.992 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:17.992 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:17.992 | 99.99th=[41157] 00:21:17.992 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:21:17.992 slat (nsec): min=7570, max=89412, avg=17739.24, stdev=8018.27 00:21:17.992 clat (usec): min=201, max=536, avg=293.47, stdev=58.22 00:21:17.992 lat (usec): min=220, max=557, avg=311.21, stdev=59.89 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 245], 00:21:17.992 | 30.00th=[ 260], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:21:17.992 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 424], 00:21:17.992 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 537], 00:21:17.992 | 99.99th=[ 537] 00:21:17.992 bw ( KiB/s): min= 4096, max= 4096, per=36.91%, avg=4096.00, stdev= 0.00, samples=1 00:21:17.992 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:17.992 lat (usec) : 250=23.26%, 500=72.23%, 750=0.56% 00:21:17.992 lat (msec) : 50=3.94% 00:21:17.992 cpu : usr=0.29%, sys=0.98%, ctx=535, majf=0, minf=2 00:21:17.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.992 job1: (groupid=0, jobs=1): err= 0: pid=272818: Thu Jul 11 23:34:38 2024 00:21:17.992 read: IOPS=487, BW=1950KiB/s (1997kB/s)(2016KiB/1034msec) 00:21:17.992 slat (nsec): min=9828, max=35036, avg=12551.46, stdev=3788.93 00:21:17.992 clat (usec): min=407, max=41653, avg=1702.32, stdev=6652.40 00:21:17.992 lat (usec): min=418, max=41685, avg=1714.87, stdev=6653.69 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[ 424], 5.00th=[ 449], 10.00th=[ 465], 20.00th=[ 482], 00:21:17.992 | 30.00th=[ 502], 40.00th=[ 523], 50.00th=[ 537], 60.00th=[ 570], 00:21:17.992 | 70.00th=[ 619], 80.00th=[ 701], 90.00th=[ 816], 95.00th=[ 865], 00:21:17.992 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:21:17.992 | 99.99th=[41681] 00:21:17.992 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:21:17.992 slat (nsec): min=9774, max=97706, avg=15711.29, stdev=10078.58 00:21:17.992 clat (usec): min=193, max=1307, avg=306.32, stdev=78.60 00:21:17.992 lat (usec): min=222, max=1325, avg=322.03, stdev=79.21 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 249], 20.00th=[ 265], 
00:21:17.992 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:21:17.992 | 70.00th=[ 314], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 412], 00:21:17.992 | 99.00th=[ 465], 99.50th=[ 742], 99.90th=[ 1303], 99.95th=[ 1303], 00:21:17.992 | 99.99th=[ 1303] 00:21:17.992 bw ( KiB/s): min= 4096, max= 4096, per=36.91%, avg=4096.00, stdev= 0.00, samples=1 00:21:17.992 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:17.992 lat (usec) : 250=5.41%, 500=58.86%, 750=28.64%, 1000=5.51% 00:21:17.992 lat (msec) : 2=0.20%, 50=1.38% 00:21:17.992 cpu : usr=1.06%, sys=1.65%, ctx=1016, majf=0, minf=1 00:21:17.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 issued rwts: total=504,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.992 job2: (groupid=0, jobs=1): err= 0: pid=272825: Thu Jul 11 23:34:38 2024 00:21:17.992 read: IOPS=409, BW=1639KiB/s (1678kB/s)(1668KiB/1018msec) 00:21:17.992 slat (nsec): min=8655, max=49645, avg=19060.70, stdev=4716.83 00:21:17.992 clat (usec): min=429, max=41035, avg=2015.61, stdev=7528.19 00:21:17.992 lat (usec): min=439, max=41076, avg=2034.67, stdev=7531.16 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[ 453], 5.00th=[ 482], 10.00th=[ 506], 20.00th=[ 529], 00:21:17.992 | 30.00th=[ 537], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 578], 00:21:17.992 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 635], 95.00th=[ 668], 00:21:17.992 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:17.992 | 99.99th=[41157] 00:21:17.992 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:21:17.992 slat (nsec): min=8096, max=76526, avg=17889.71, stdev=7802.40 00:21:17.992 clat (usec): min=213, max=577, avg=302.72, stdev=67.26 00:21:17.992 lat (usec): min=229, max=587, avg=320.61, stdev=69.42 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 251], 00:21:17.992 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 293], 00:21:17.992 | 70.00th=[ 314], 80.00th=[ 343], 90.00th=[ 424], 95.00th=[ 441], 00:21:17.992 | 99.00th=[ 498], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 578], 00:21:17.992 | 99.99th=[ 578] 00:21:17.992 bw ( KiB/s): min= 4096, max= 4096, per=36.91%, avg=4096.00, stdev= 0.00, samples=1 00:21:17.992 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:17.992 lat (usec) : 250=10.98%, 500=47.26%, 750=40.04%, 1000=0.11% 00:21:17.992 lat (msec) : 50=1.61% 00:21:17.992 cpu : usr=1.08%, sys=1.47%, ctx=930, majf=0, minf=1 00:21:17.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 issued rwts: total=417,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.992 job3: (groupid=0, jobs=1): err= 0: pid=272832: Thu Jul 11 23:34:38 2024 00:21:17.992 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:21:17.992 slat (nsec): min=6553, max=22415, avg=7942.00, stdev=1954.26 00:21:17.992 clat (usec): min=408, max=750, avg=522.94, stdev=50.53 00:21:17.992 lat (usec): 
min=416, max=764, avg=530.88, stdev=50.80 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[ 424], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 478], 00:21:17.992 | 30.00th=[ 494], 40.00th=[ 510], 50.00th=[ 523], 60.00th=[ 537], 00:21:17.992 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 594], 95.00th=[ 611], 00:21:17.992 | 99.00th=[ 652], 99.50th=[ 693], 99.90th=[ 717], 99.95th=[ 750], 00:21:17.992 | 99.99th=[ 750] 00:21:17.992 write: IOPS=1331, BW=5327KiB/s (5455kB/s)(5332KiB/1001msec); 0 zone resets 00:21:17.992 slat (nsec): min=8408, max=58789, avg=13357.48, stdev=5070.13 00:21:17.992 clat (usec): min=215, max=574, avg=323.98, stdev=62.53 00:21:17.992 lat (usec): min=230, max=590, avg=337.33, stdev=63.82 00:21:17.992 clat percentiles (usec): 00:21:17.992 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 265], 00:21:17.992 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 330], 00:21:17.992 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 404], 95.00th=[ 412], 00:21:17.992 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 570], 99.95th=[ 578], 00:21:17.992 | 99.99th=[ 578] 00:21:17.992 bw ( KiB/s): min= 6032, max= 6032, per=54.35%, avg=6032.00, stdev= 0.00, samples=1 00:21:17.992 iops : min= 1508, max= 1508, avg=1508.00, stdev= 0.00, samples=1 00:21:17.992 lat (usec) : 250=4.92%, 500=66.31%, 750=28.72%, 1000=0.04% 00:21:17.992 cpu : usr=1.80%, sys=3.30%, ctx=2359, majf=0, minf=1 00:21:17.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.992 issued rwts: total=1024,1333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:17.992 00:21:17.992 Run status group 0 (all jobs): 00:21:17.992 READ: bw=7605KiB/s (7788kB/s), 82.1KiB/s-4092KiB/s (84.1kB/s-4190kB/s), io=7864KiB (8053kB), run=1001-1034msec 00:21:17.992 WRITE: bw=10.8MiB/s (11.4MB/s), 1981KiB/s-5327KiB/s (2028kB/s-5455kB/s), io=11.2MiB (11.8MB), run=1001-1034msec 00:21:17.992 00:21:17.992 Disk stats (read/write): 00:21:17.992 nvme0n1: ios=65/512, merge=0/0, ticks=703/149, in_queue=852, util=86.57% 00:21:17.992 nvme0n2: ios=548/512, merge=0/0, ticks=693/153, in_queue=846, util=85.91% 00:21:17.992 nvme0n3: ios=467/512, merge=0/0, ticks=647/151, in_queue=798, util=90.01% 00:21:17.992 nvme0n4: ios=937/1024, merge=0/0, ticks=625/302, in_queue=927, util=96.66% 00:21:17.992 23:34:38 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:17.992 [global] 00:21:17.992 thread=1 00:21:17.992 invalidate=1 00:21:17.992 rw=randwrite 00:21:17.992 time_based=1 00:21:17.992 runtime=1 00:21:17.992 ioengine=libaio 00:21:17.992 direct=1 00:21:17.992 bs=4096 00:21:17.992 iodepth=1 00:21:17.992 norandommap=0 00:21:17.992 numjobs=1 00:21:17.992 00:21:17.992 verify_dump=1 00:21:17.992 verify_backlog=512 00:21:17.992 verify_state_save=0 00:21:17.992 do_verify=1 00:21:17.992 verify=crc32c-intel 00:21:17.992 [job0] 00:21:17.992 filename=/dev/nvme0n1 00:21:17.992 [job1] 00:21:17.992 filename=/dev/nvme0n2 00:21:17.992 [job2] 00:21:17.992 filename=/dev/nvme0n3 00:21:17.992 [job3] 00:21:17.992 filename=/dev/nvme0n4 00:21:17.992 Could not set queue depth (nvme0n1) 00:21:17.992 Could not set queue depth (nvme0n2) 00:21:17.992 Could not set queue depth (nvme0n3) 00:21:17.992 Could not set queue depth (nvme0n4) 
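[Editor's note] The [global]/[job] dump just above is the job file fio-wrapper generates from its -p/-i/-d/-t/-r/-v flags. Reconstructed as a standalone file (a sketch; it assumes the four cnode1 namespaces enumerate as /dev/nvme0n1 through /dev/nvme0n4, as they do on this host), it can be saved out and passed directly to fio:

    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4

The "Could not set queue depth" warnings just above do not fail the run: all four jobs start, complete, and pass crc32c verification below regardless.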
00:21:18.250 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:18.250 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:18.250 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:18.250 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:18.250 fio-3.35 00:21:18.250 Starting 4 threads 00:21:19.624 00:21:19.624 job0: (groupid=0, jobs=1): err= 0: pid=273176: Thu Jul 11 23:34:40 2024 00:21:19.624 read: IOPS=20, BW=82.4KiB/s (84.4kB/s)(84.0KiB/1019msec) 00:21:19.624 slat (nsec): min=9098, max=21839, avg=16156.48, stdev=2503.77 00:21:19.624 clat (usec): min=40821, max=41201, avg=40975.28, stdev=77.46 00:21:19.624 lat (usec): min=40836, max=41221, avg=40991.44, stdev=78.27 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:21:19.624 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:19.624 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:19.624 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:19.624 | 99.99th=[41157] 00:21:19.624 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:21:19.624 slat (nsec): min=9433, max=40767, avg=12433.92, stdev=3554.85 00:21:19.624 clat (usec): min=198, max=525, avg=291.89, stdev=51.37 00:21:19.624 lat (usec): min=210, max=538, avg=304.33, stdev=51.32 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 249], 00:21:19.624 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:21:19.624 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 351], 95.00th=[ 383], 00:21:19.624 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 529], 99.95th=[ 529], 00:21:19.624 | 99.99th=[ 529] 00:21:19.624 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:21:19.624 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:19.624 lat (usec) : 250=20.08%, 500=75.80%, 750=0.19% 00:21:19.624 lat (msec) : 50=3.94% 00:21:19.624 cpu : usr=0.49%, sys=0.79%, ctx=535, majf=0, minf=1 00:21:19.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:19.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.624 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:19.624 job1: (groupid=0, jobs=1): err= 0: pid=273177: Thu Jul 11 23:34:40 2024 00:21:19.624 read: IOPS=1024, BW=4100KiB/s (4198kB/s)(4104KiB/1001msec) 00:21:19.624 slat (usec): min=6, max=116, avg=15.76, stdev= 7.62 00:21:19.624 clat (usec): min=286, max=41448, avg=536.24, stdev=1801.88 00:21:19.624 lat (usec): min=295, max=41508, avg=552.00, stdev=1802.97 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[ 302], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 379], 00:21:19.624 | 30.00th=[ 408], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 474], 00:21:19.624 | 70.00th=[ 494], 80.00th=[ 523], 90.00th=[ 570], 95.00th=[ 619], 00:21:19.624 | 99.00th=[ 734], 99.50th=[ 832], 99.90th=[41157], 99.95th=[41681], 00:21:19.624 | 99.99th=[41681] 00:21:19.624 write: IOPS=1534, BW=6138KiB/s 
(6285kB/s)(6144KiB/1001msec); 0 zone resets 00:21:19.624 slat (usec): min=6, max=107, avg=14.15, stdev= 8.89 00:21:19.624 clat (usec): min=179, max=529, avg=261.98, stdev=66.40 00:21:19.624 lat (usec): min=186, max=548, avg=276.12, stdev=70.49 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:21:19.624 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 239], 60.00th=[ 265], 00:21:19.624 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 367], 95.00th=[ 396], 00:21:19.624 | 99.00th=[ 465], 99.50th=[ 469], 99.90th=[ 494], 99.95th=[ 529], 00:21:19.624 | 99.99th=[ 529] 00:21:19.624 bw ( KiB/s): min= 5920, max= 6368, per=43.97%, avg=6144.00, stdev=316.78, samples=2 00:21:19.624 iops : min= 1480, max= 1592, avg=1536.00, stdev=79.20, samples=2 00:21:19.624 lat (usec) : 250=32.79%, 500=56.21%, 750=10.66%, 1000=0.23% 00:21:19.624 lat (msec) : 4=0.04%, 50=0.08% 00:21:19.624 cpu : usr=1.40%, sys=4.40%, ctx=2565, majf=0, minf=2 00:21:19.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:19.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.624 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:19.624 job2: (groupid=0, jobs=1): err= 0: pid=273178: Thu Jul 11 23:34:40 2024 00:21:19.624 read: IOPS=881, BW=3528KiB/s (3612kB/s)(3616KiB/1025msec) 00:21:19.624 slat (nsec): min=5644, max=72055, avg=17334.64, stdev=6931.00 00:21:19.624 clat (usec): min=320, max=41100, avg=740.42, stdev=2999.97 00:21:19.624 lat (usec): min=335, max=41115, avg=757.75, stdev=2999.87 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[ 355], 5.00th=[ 392], 10.00th=[ 416], 20.00th=[ 441], 00:21:19.624 | 30.00th=[ 461], 40.00th=[ 486], 50.00th=[ 510], 60.00th=[ 529], 00:21:19.624 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 676], 00:21:19.624 | 99.00th=[ 791], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:21:19.624 | 99.99th=[41157] 00:21:19.624 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:21:19.624 slat (nsec): min=6694, max=56417, avg=15208.51, stdev=6337.17 00:21:19.624 clat (usec): min=198, max=534, avg=307.55, stdev=78.39 00:21:19.624 lat (usec): min=206, max=551, avg=322.76, stdev=80.20 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 237], 00:21:19.624 | 30.00th=[ 260], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 302], 00:21:19.624 | 70.00th=[ 326], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 449], 00:21:19.624 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[ 537], 99.95th=[ 537], 00:21:19.624 | 99.99th=[ 537] 00:21:19.624 bw ( KiB/s): min= 2896, max= 5296, per=29.31%, avg=4096.00, stdev=1697.06, samples=2 00:21:19.624 iops : min= 724, max= 1324, avg=1024.00, stdev=424.26, samples=2 00:21:19.624 lat (usec) : 250=14.68%, 500=59.49%, 750=25.00%, 1000=0.47% 00:21:19.624 lat (msec) : 2=0.05%, 4=0.05%, 50=0.26% 00:21:19.624 cpu : usr=1.46%, sys=3.42%, ctx=1928, majf=0, minf=1 00:21:19.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:19.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.624 issued rwts: total=904,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.624 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:21:19.624 job3: (groupid=0, jobs=1): err= 0: pid=273179: Thu Jul 11 23:34:40 2024 00:21:19.624 read: IOPS=20, BW=81.9KiB/s (83.8kB/s)(84.0KiB/1026msec) 00:21:19.624 slat (nsec): min=8411, max=40495, avg=19922.00, stdev=8686.26 00:21:19.624 clat (usec): min=40771, max=41139, avg=40975.07, stdev=81.53 00:21:19.624 lat (usec): min=40805, max=41177, avg=40994.99, stdev=78.85 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:21:19.624 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:19.624 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:19.624 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:19.624 | 99.99th=[41157] 00:21:19.624 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:21:19.624 slat (nsec): min=7668, max=32776, avg=14190.34, stdev=5086.96 00:21:19.624 clat (usec): min=199, max=2520, avg=303.65, stdev=116.53 00:21:19.624 lat (usec): min=208, max=2538, avg=317.84, stdev=116.87 00:21:19.624 clat percentiles (usec): 00:21:19.624 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 237], 00:21:19.624 | 30.00th=[ 260], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:21:19.624 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 416], 00:21:19.624 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 2507], 99.95th=[ 2507], 00:21:19.624 | 99.99th=[ 2507] 00:21:19.624 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:21:19.624 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:19.624 lat (usec) : 250=26.64%, 500=69.04%, 750=0.19% 00:21:19.624 lat (msec) : 4=0.19%, 50=3.94% 00:21:19.624 cpu : usr=0.39%, sys=0.68%, ctx=534, majf=0, minf=1 00:21:19.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:19.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.625 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:19.625 00:21:19.625 Run status group 0 (all jobs): 00:21:19.625 READ: bw=7688KiB/s (7873kB/s), 81.9KiB/s-4100KiB/s (83.8kB/s-4198kB/s), io=7888KiB (8077kB), run=1001-1026msec 00:21:19.625 WRITE: bw=13.6MiB/s (14.3MB/s), 1996KiB/s-6138KiB/s (2044kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1026msec 00:21:19.625 00:21:19.625 Disk stats (read/write): 00:21:19.625 nvme0n1: ios=62/512, merge=0/0, ticks=1133/141, in_queue=1274, util=88.57% 00:21:19.625 nvme0n2: ios=1040/1050, merge=0/0, ticks=864/268, in_queue=1132, util=99.90% 00:21:19.625 nvme0n3: ios=874/1024, merge=0/0, ticks=471/301, in_queue=772, util=88.93% 00:21:19.625 nvme0n4: ios=64/512, merge=0/0, ticks=1103/153, in_queue=1256, util=96.85% 00:21:19.625 23:34:40 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:19.625 [global] 00:21:19.625 thread=1 00:21:19.625 invalidate=1 00:21:19.625 rw=write 00:21:19.625 time_based=1 00:21:19.625 runtime=1 00:21:19.625 ioengine=libaio 00:21:19.625 direct=1 00:21:19.625 bs=4096 00:21:19.625 iodepth=128 00:21:19.625 norandommap=0 00:21:19.625 numjobs=1 00:21:19.625 00:21:19.625 verify_dump=1 00:21:19.625 verify_backlog=512 00:21:19.625 verify_state_save=0 00:21:19.625 do_verify=1 00:21:19.625 
verify=crc32c-intel 00:21:19.625 [job0] 00:21:19.625 filename=/dev/nvme0n1 00:21:19.625 [job1] 00:21:19.625 filename=/dev/nvme0n2 00:21:19.625 [job2] 00:21:19.625 filename=/dev/nvme0n3 00:21:19.625 [job3] 00:21:19.625 filename=/dev/nvme0n4 00:21:19.625 Could not set queue depth (nvme0n1) 00:21:19.625 Could not set queue depth (nvme0n2) 00:21:19.625 Could not set queue depth (nvme0n3) 00:21:19.625 Could not set queue depth (nvme0n4) 00:21:19.886 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.886 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.886 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.886 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:19.886 fio-3.35 00:21:19.886 Starting 4 threads 00:21:21.263 00:21:21.263 job0: (groupid=0, jobs=1): err= 0: pid=273408: Thu Jul 11 23:34:41 2024 00:21:21.263 read: IOPS=3881, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1006msec) 00:21:21.263 slat (usec): min=2, max=7475, avg=124.20, stdev=688.07 00:21:21.263 clat (usec): min=4865, max=27466, avg=15487.74, stdev=3718.49 00:21:21.263 lat (usec): min=4869, max=27487, avg=15611.94, stdev=3751.27 00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 8848], 5.00th=[10683], 10.00th=[11469], 20.00th=[11731], 00:21:21.263 | 30.00th=[12518], 40.00th=[14091], 50.00th=[15270], 60.00th=[16319], 00:21:21.263 | 70.00th=[17957], 80.00th=[19006], 90.00th=[20317], 95.00th=[21365], 00:21:21.263 | 99.00th=[23725], 99.50th=[25560], 99.90th=[25822], 99.95th=[27395], 00:21:21.263 | 99.99th=[27395] 00:21:21.263 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:21:21.263 slat (usec): min=3, max=12859, avg=119.46, stdev=674.66 00:21:21.263 clat (usec): min=3228, max=33566, avg=16200.87, stdev=4477.61 00:21:21.263 lat (usec): min=3236, max=33577, avg=16320.33, stdev=4507.33 00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 5276], 5.00th=[10421], 10.00th=[11863], 20.00th=[13042], 00:21:21.263 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15270], 60.00th=[16581], 00:21:21.263 | 70.00th=[16909], 80.00th=[19792], 90.00th=[22414], 95.00th=[24773], 00:21:21.263 | 99.00th=[30016], 99.50th=[30016], 99.90th=[33424], 99.95th=[33424], 00:21:21.263 | 99.99th=[33817] 00:21:21.263 bw ( KiB/s): min=16384, max=16384, per=23.42%, avg=16384.00, stdev= 0.00, samples=2 00:21:21.263 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:21:21.263 lat (msec) : 4=0.21%, 10=3.21%, 20=80.71%, 50=15.86% 00:21:21.263 cpu : usr=3.38%, sys=4.38%, ctx=377, majf=0, minf=1 00:21:21.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:21.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:21.263 issued rwts: total=3905,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:21.263 job1: (groupid=0, jobs=1): err= 0: pid=273410: Thu Jul 11 23:34:41 2024 00:21:21.263 read: IOPS=4135, BW=16.2MiB/s (16.9MB/s)(16.2MiB/1002msec) 00:21:21.263 slat (usec): min=2, max=10466, avg=118.87, stdev=649.15 00:21:21.263 clat (usec): min=642, max=35559, avg=15588.23, stdev=5397.34 00:21:21.263 lat (usec): min=2469, max=35574, avg=15707.09, stdev=5434.14 
00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 7242], 5.00th=[10028], 10.00th=[10683], 20.00th=[11338], 00:21:21.263 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13566], 60.00th=[15533], 00:21:21.263 | 70.00th=[17433], 80.00th=[20317], 90.00th=[23987], 95.00th=[26346], 00:21:21.263 | 99.00th=[30016], 99.50th=[31589], 99.90th=[31589], 99.95th=[32900], 00:21:21.263 | 99.99th=[35390] 00:21:21.263 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:21:21.263 slat (usec): min=3, max=8098, avg=103.31, stdev=554.74 00:21:21.263 clat (usec): min=7312, max=30395, avg=13472.68, stdev=3487.40 00:21:21.263 lat (usec): min=7317, max=30411, avg=13575.99, stdev=3498.80 00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 8094], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10945], 00:21:21.263 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[13173], 00:21:21.263 | 70.00th=[14222], 80.00th=[15401], 90.00th=[17957], 95.00th=[21890], 00:21:21.263 | 99.00th=[25035], 99.50th=[25297], 99.90th=[27395], 99.95th=[30016], 00:21:21.263 | 99.99th=[30278] 00:21:21.263 bw ( KiB/s): min=16432, max=19800, per=25.90%, avg=18116.00, stdev=2381.54, samples=2 00:21:21.263 iops : min= 4108, max= 4950, avg=4529.00, stdev=595.38, samples=2 00:21:21.263 lat (usec) : 750=0.01% 00:21:21.263 lat (msec) : 4=0.17%, 10=7.32%, 20=78.93%, 50=13.56% 00:21:21.263 cpu : usr=3.50%, sys=5.69%, ctx=424, majf=0, minf=1 00:21:21.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:21.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:21.263 issued rwts: total=4144,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:21.263 job2: (groupid=0, jobs=1): err= 0: pid=273419: Thu Jul 11 23:34:41 2024 00:21:21.263 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:21:21.263 slat (usec): min=3, max=12910, avg=124.83, stdev=738.73 00:21:21.263 clat (usec): min=2835, max=45076, avg=16358.89, stdev=6763.45 00:21:21.263 lat (usec): min=2852, max=45086, avg=16483.72, stdev=6797.73 00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 5735], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11994], 00:21:21.263 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[14877], 00:21:21.263 | 70.00th=[17695], 80.00th=[20841], 90.00th=[25822], 95.00th=[30016], 00:21:21.263 | 99.00th=[40109], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:21:21.263 | 99.99th=[44827] 00:21:21.263 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:21:21.263 slat (usec): min=4, max=10297, avg=109.17, stdev=629.57 00:21:21.263 clat (usec): min=892, max=31469, avg=14540.75, stdev=5070.49 00:21:21.263 lat (usec): min=4524, max=37260, avg=14649.92, stdev=5084.39 00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 5538], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11600], 00:21:21.263 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[13698], 00:21:21.263 | 70.00th=[14222], 80.00th=[16319], 90.00th=[22414], 95.00th=[27395], 00:21:21.263 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:21:21.263 | 99.99th=[31589] 00:21:21.263 bw ( KiB/s): min=12288, max=20480, per=23.42%, avg=16384.00, stdev=5792.62, samples=2 00:21:21.263 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:21:21.263 lat (usec) : 1000=0.01% 
00:21:21.263 lat (msec) : 4=0.02%, 10=9.09%, 20=73.46%, 50=17.41% 00:21:21.263 cpu : usr=4.38%, sys=6.47%, ctx=368, majf=0, minf=1 00:21:21.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:21.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:21.263 issued rwts: total=4096,4108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:21.263 job3: (groupid=0, jobs=1): err= 0: pid=273420: Thu Jul 11 23:34:41 2024 00:21:21.263 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:21:21.263 slat (usec): min=3, max=9088, avg=99.92, stdev=576.92 00:21:21.263 clat (usec): min=6985, max=23965, avg=12951.91, stdev=2162.03 00:21:21.263 lat (usec): min=6999, max=23991, avg=13051.83, stdev=2179.59 00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 8225], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11207], 00:21:21.263 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:21:21.263 | 70.00th=[13829], 80.00th=[14746], 90.00th=[15401], 95.00th=[17171], 00:21:21.263 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19792], 99.95th=[21103], 00:21:21.263 | 99.99th=[23987] 00:21:21.263 write: IOPS=4764, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1003msec); 0 zone resets 00:21:21.263 slat (usec): min=4, max=6947, avg=102.03, stdev=549.77 00:21:21.263 clat (usec): min=1332, max=43757, avg=13982.55, stdev=5810.04 00:21:21.263 lat (usec): min=1339, max=43774, avg=14084.58, stdev=5841.67 00:21:21.263 clat percentiles (usec): 00:21:21.263 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[11207], 00:21:21.263 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:21:21.264 | 70.00th=[13566], 80.00th=[14222], 90.00th=[17171], 95.00th=[25822], 00:21:21.264 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:21:21.264 | 99.99th=[43779] 00:21:21.264 bw ( KiB/s): min=17440, max=19776, per=26.60%, avg=18608.00, stdev=1651.80, samples=2 00:21:21.264 iops : min= 4360, max= 4944, avg=4652.00, stdev=412.95, samples=2 00:21:21.264 lat (msec) : 2=0.04%, 10=8.33%, 20=87.70%, 50=3.93% 00:21:21.264 cpu : usr=4.99%, sys=8.48%, ctx=444, majf=0, minf=1 00:21:21.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:21.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:21.264 issued rwts: total=4608,4779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:21.264 00:21:21.264 Run status group 0 (all jobs): 00:21:21.264 READ: bw=65.1MiB/s (68.2MB/s), 15.2MiB/s-17.9MiB/s (15.9MB/s-18.8MB/s), io=65.4MiB (68.6MB), run=1002-1006msec 00:21:21.264 WRITE: bw=68.3MiB/s (71.6MB/s), 15.9MiB/s-18.6MiB/s (16.7MB/s-19.5MB/s), io=68.7MiB (72.1MB), run=1002-1006msec 00:21:21.264 00:21:21.264 Disk stats (read/write): 00:21:21.264 nvme0n1: ios=3112/3583, merge=0/0, ticks=16283/17508, in_queue=33791, util=96.69% 00:21:21.264 nvme0n2: ios=3604/3976, merge=0/0, ticks=16540/15220, in_queue=31760, util=85.09% 00:21:21.264 nvme0n3: ios=3623/3610, merge=0/0, ticks=25454/18151, in_queue=43605, util=97.34% 00:21:21.264 nvme0n4: ios=3584/3975, merge=0/0, ticks=22734/27457, in_queue=50191, util=89.34% 00:21:21.264 23:34:41 -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:21.264 [global] 00:21:21.264 thread=1 00:21:21.264 invalidate=1 00:21:21.264 rw=randwrite 00:21:21.264 time_based=1 00:21:21.264 runtime=1 00:21:21.264 ioengine=libaio 00:21:21.264 direct=1 00:21:21.264 bs=4096 00:21:21.264 iodepth=128 00:21:21.264 norandommap=0 00:21:21.264 numjobs=1 00:21:21.264 00:21:21.264 verify_dump=1 00:21:21.264 verify_backlog=512 00:21:21.264 verify_state_save=0 00:21:21.264 do_verify=1 00:21:21.264 verify=crc32c-intel 00:21:21.264 [job0] 00:21:21.264 filename=/dev/nvme0n1 00:21:21.264 [job1] 00:21:21.264 filename=/dev/nvme0n2 00:21:21.264 [job2] 00:21:21.264 filename=/dev/nvme0n3 00:21:21.264 [job3] 00:21:21.264 filename=/dev/nvme0n4 00:21:21.264 Could not set queue depth (nvme0n1) 00:21:21.264 Could not set queue depth (nvme0n2) 00:21:21.264 Could not set queue depth (nvme0n3) 00:21:21.264 Could not set queue depth (nvme0n4) 00:21:21.264 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:21.264 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:21.264 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:21.264 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:21.264 fio-3.35 00:21:21.264 Starting 4 threads 00:21:22.640 00:21:22.640 job0: (groupid=0, jobs=1): err= 0: pid=273650: Thu Jul 11 23:34:43 2024 00:21:22.640 read: IOPS=1333, BW=5336KiB/s (5464kB/s)(5368KiB/1006msec) 00:21:22.640 slat (usec): min=3, max=59553, avg=414.37, stdev=2888.08 00:21:22.640 clat (usec): min=1055, max=130050, avg=49533.74, stdev=27230.07 00:21:22.640 lat (msec): min=13, max=130, avg=49.95, stdev=27.39 00:21:22.640 clat percentiles (msec): 00:21:22.640 | 1.00th=[ 14], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 28], 00:21:22.640 | 30.00th=[ 32], 40.00th=[ 35], 50.00th=[ 38], 60.00th=[ 42], 00:21:22.640 | 70.00th=[ 51], 80.00th=[ 87], 90.00th=[ 95], 95.00th=[ 96], 00:21:22.640 | 99.00th=[ 110], 99.50th=[ 110], 99.90th=[ 113], 99.95th=[ 131], 00:21:22.640 | 99.99th=[ 131] 00:21:22.640 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:21:22.640 slat (usec): min=4, max=37662, avg=282.87, stdev=1961.44 00:21:22.640 clat (msec): min=14, max=114, avg=37.24, stdev=23.48 00:21:22.640 lat (msec): min=14, max=115, avg=37.52, stdev=23.65 00:21:22.640 clat percentiles (msec): 00:21:22.640 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 23], 00:21:22.640 | 30.00th=[ 25], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:21:22.640 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 71], 95.00th=[ 103], 00:21:22.640 | 99.00th=[ 115], 99.50th=[ 115], 99.90th=[ 115], 99.95th=[ 115], 00:21:22.640 | 99.99th=[ 115] 00:21:22.640 bw ( KiB/s): min= 4096, max= 8192, per=12.55%, avg=6144.00, stdev=2896.31, samples=2 00:21:22.640 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:21:22.640 lat (msec) : 2=0.03%, 20=10.11%, 50=68.35%, 100=16.37%, 250=5.14% 00:21:22.640 cpu : usr=1.19%, sys=2.69%, ctx=119, majf=0, minf=13 00:21:22.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:21:22.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.640 issued rwts: 
total=1342,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.640 job1: (groupid=0, jobs=1): err= 0: pid=273651: Thu Jul 11 23:34:43 2024 00:21:22.640 read: IOPS=5299, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1002msec) 00:21:22.640 slat (usec): min=2, max=17293, avg=73.40, stdev=521.25 00:21:22.641 clat (usec): min=763, max=52339, avg=10223.64, stdev=4603.78 00:21:22.641 lat (usec): min=840, max=52356, avg=10297.03, stdev=4652.31 00:21:22.641 clat percentiles (usec): 00:21:22.641 | 1.00th=[ 1762], 5.00th=[ 4146], 10.00th=[ 6980], 20.00th=[ 8455], 00:21:22.641 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9765], 00:21:22.641 | 70.00th=[10814], 80.00th=[11994], 90.00th=[13960], 95.00th=[17957], 00:21:22.641 | 99.00th=[37487], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:21:22.641 | 99.99th=[52167] 00:21:22.641 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:21:22.641 slat (usec): min=3, max=15913, avg=84.15, stdev=513.39 00:21:22.641 clat (usec): min=472, max=71629, avg=12810.43, stdev=10361.01 00:21:22.641 lat (usec): min=496, max=71634, avg=12894.59, stdev=10413.37 00:21:22.641 clat percentiles (usec): 00:21:22.641 | 1.00th=[ 1975], 5.00th=[ 5473], 10.00th=[ 6849], 20.00th=[ 8586], 00:21:22.641 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10159], 00:21:22.641 | 70.00th=[11469], 80.00th=[13960], 90.00th=[17957], 95.00th=[35390], 00:21:22.641 | 99.00th=[64750], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:21:22.641 | 99.99th=[71828] 00:21:22.641 bw ( KiB/s): min=19128, max=25928, per=46.02%, avg=22528.00, stdev=4808.33, samples=2 00:21:22.641 iops : min= 4782, max= 6482, avg=5632.00, stdev=1202.08, samples=2 00:21:22.641 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.27% 00:21:22.641 lat (msec) : 2=1.32%, 4=2.22%, 10=55.80%, 20=34.05%, 50=4.96% 00:21:22.641 lat (msec) : 100=1.32% 00:21:22.641 cpu : usr=3.30%, sys=7.49%, ctx=628, majf=0, minf=5 00:21:22.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:22.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.641 issued rwts: total=5310,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.641 job2: (groupid=0, jobs=1): err= 0: pid=273652: Thu Jul 11 23:34:43 2024 00:21:22.641 read: IOPS=3377, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1005msec) 00:21:22.641 slat (usec): min=3, max=13989, avg=127.07, stdev=822.85 00:21:22.641 clat (usec): min=2138, max=42451, avg=15723.83, stdev=5403.93 00:21:22.641 lat (usec): min=7109, max=42458, avg=15850.90, stdev=5449.79 00:21:22.641 clat percentiles (usec): 00:21:22.641 | 1.00th=[ 7439], 5.00th=[10814], 10.00th=[11338], 20.00th=[12125], 00:21:22.641 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14484], 60.00th=[15270], 00:21:22.641 | 70.00th=[16188], 80.00th=[17695], 90.00th=[20841], 95.00th=[28705], 00:21:22.641 | 99.00th=[35390], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:21:22.641 | 99.99th=[42206] 00:21:22.641 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:21:22.641 slat (usec): min=4, max=11074, avg=148.50, stdev=731.04 00:21:22.641 clat (usec): min=1665, max=64686, avg=20681.66, stdev=13963.64 00:21:22.641 lat (usec): min=1676, max=64694, avg=20830.16, stdev=14049.27 00:21:22.641 clat percentiles (usec): 00:21:22.641 | 
1.00th=[ 3720], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 8979], 00:21:22.641 | 30.00th=[10945], 40.00th=[12387], 50.00th=[14222], 60.00th=[18220], 00:21:22.641 | 70.00th=[28443], 80.00th=[32637], 90.00th=[40633], 95.00th=[52167], 00:21:22.641 | 99.00th=[60556], 99.50th=[61080], 99.90th=[64750], 99.95th=[64750], 00:21:22.641 | 99.99th=[64750] 00:21:22.641 bw ( KiB/s): min=12288, max=16384, per=29.29%, avg=14336.00, stdev=2896.31, samples=2 00:21:22.641 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:21:22.641 lat (msec) : 2=0.14%, 4=0.40%, 10=12.71%, 20=60.78%, 50=23.14% 00:21:22.641 lat (msec) : 100=2.82% 00:21:22.641 cpu : usr=3.59%, sys=5.38%, ctx=331, majf=0, minf=11 00:21:22.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:22.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.641 issued rwts: total=3394,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.641 job3: (groupid=0, jobs=1): err= 0: pid=273653: Thu Jul 11 23:34:43 2024 00:21:22.641 read: IOPS=1669, BW=6677KiB/s (6837kB/s)(6984KiB/1046msec) 00:21:22.641 slat (usec): min=2, max=55390, avg=322.20, stdev=2161.00 00:21:22.641 clat (usec): min=8977, max=93638, avg=39470.82, stdev=19465.11 00:21:22.641 lat (msec): min=8, max=104, avg=39.79, stdev=19.58 00:21:22.641 clat percentiles (usec): 00:21:22.641 | 1.00th=[10945], 5.00th=[17957], 10.00th=[20055], 20.00th=[23462], 00:21:22.641 | 30.00th=[24249], 40.00th=[24511], 50.00th=[32113], 60.00th=[44303], 00:21:22.641 | 70.00th=[52691], 80.00th=[58459], 90.00th=[67634], 95.00th=[73925], 00:21:22.641 | 99.00th=[82314], 99.50th=[92799], 99.90th=[92799], 99.95th=[93848], 00:21:22.641 | 99.99th=[93848] 00:21:22.641 write: IOPS=1957, BW=7832KiB/s (8020kB/s)(8192KiB/1046msec); 0 zone resets 00:21:22.641 slat (usec): min=4, max=11573, avg=208.43, stdev=955.77 00:21:22.641 clat (usec): min=8577, max=87472, avg=30738.99, stdev=22005.13 00:21:22.641 lat (usec): min=9572, max=87478, avg=30947.41, stdev=22114.44 00:21:22.641 clat percentiles (usec): 00:21:22.641 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11994], 20.00th=[12911], 00:21:22.641 | 30.00th=[14222], 40.00th=[17433], 50.00th=[21103], 60.00th=[28443], 00:21:22.641 | 70.00th=[32113], 80.00th=[56361], 90.00th=[69731], 95.00th=[76022], 00:21:22.641 | 99.00th=[83362], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:21:22.641 | 99.99th=[87557] 00:21:22.641 bw ( KiB/s): min= 7280, max= 9104, per=16.74%, avg=8192.00, stdev=1289.76, samples=2 00:21:22.641 iops : min= 1820, max= 2276, avg=2048.00, stdev=322.44, samples=2 00:21:22.641 lat (msec) : 10=1.34%, 20=28.23%, 50=43.44%, 100=26.99% 00:21:22.641 cpu : usr=1.24%, sys=2.78%, ctx=232, majf=0, minf=21 00:21:22.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:21:22.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.641 issued rwts: total=1746,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.641 00:21:22.641 Run status group 0 (all jobs): 00:21:22.641 READ: bw=44.0MiB/s (46.2MB/s), 5336KiB/s-20.7MiB/s (5464kB/s-21.7MB/s), io=46.1MiB (48.3MB), run=1002-1046msec 00:21:22.641 WRITE: bw=47.8MiB/s (50.1MB/s), 6107KiB/s-22.0MiB/s 
(6254kB/s-23.0MB/s), io=50.0MiB (52.4MB), run=1002-1046msec 00:21:22.641 00:21:22.641 Disk stats (read/write): 00:21:22.641 nvme0n1: ios=1049/1536, merge=0/0, ticks=15117/17347, in_queue=32464, util=96.39% 00:21:22.641 nvme0n2: ios=4135/4223, merge=0/0, ticks=28938/37106, in_queue=66044, util=98.67% 00:21:22.641 nvme0n3: ios=2242/2560, merge=0/0, ticks=36072/62132, in_queue=98204, util=87.01% 00:21:22.641 nvme0n4: ios=1787/2048, merge=0/0, ticks=21282/18521, in_queue=39803, util=98.34% 00:21:22.641 23:34:43 -- target/fio.sh@55 -- # sync 00:21:22.641 23:34:43 -- target/fio.sh@59 -- # fio_pid=273791 00:21:22.641 23:34:43 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:22.641 23:34:43 -- target/fio.sh@61 -- # sleep 3 00:21:22.641 [global] 00:21:22.641 thread=1 00:21:22.641 invalidate=1 00:21:22.641 rw=read 00:21:22.641 time_based=1 00:21:22.641 runtime=10 00:21:22.641 ioengine=libaio 00:21:22.641 direct=1 00:21:22.641 bs=4096 00:21:22.641 iodepth=1 00:21:22.641 norandommap=1 00:21:22.641 numjobs=1 00:21:22.641 00:21:22.641 [job0] 00:21:22.641 filename=/dev/nvme0n1 00:21:22.641 [job1] 00:21:22.641 filename=/dev/nvme0n2 00:21:22.641 [job2] 00:21:22.641 filename=/dev/nvme0n3 00:21:22.641 [job3] 00:21:22.641 filename=/dev/nvme0n4 00:21:22.641 Could not set queue depth (nvme0n1) 00:21:22.641 Could not set queue depth (nvme0n2) 00:21:22.641 Could not set queue depth (nvme0n3) 00:21:22.641 Could not set queue depth (nvme0n4) 00:21:22.900 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:22.900 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:22.900 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:22.900 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:22.900 fio-3.35 00:21:22.900 Starting 4 threads 00:21:25.481 23:34:46 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:26.045 23:34:46 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:26.045 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=13688832, buflen=4096 00:21:26.045 fio: pid=274014, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:26.303 23:34:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:26.303 23:34:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:26.303 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=4673536, buflen=4096 00:21:26.303 fio: pid=274013, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:26.565 23:34:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:26.565 23:34:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:26.565 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1359872, buflen=4096 00:21:26.565 fio: pid=274005, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:26.822 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10547200, buflen=4096 00:21:26.822 fio: 
pid=274006, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:26.822 23:34:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:26.822 23:34:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:26.822 00:21:26.822 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=274005: Thu Jul 11 23:34:47 2024 00:21:26.822 read: IOPS=93, BW=373KiB/s (382kB/s)(1328KiB/3560msec) 00:21:26.822 slat (usec): min=6, max=26827, avg=128.41, stdev=1604.45 00:21:26.822 clat (usec): min=296, max=45629, avg=10520.02, stdev=17646.63 00:21:26.822 lat (usec): min=305, max=68954, avg=10648.76, stdev=17927.57 00:21:26.822 clat percentiles (usec): 00:21:26.822 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 347], 00:21:26.822 | 30.00th=[ 359], 40.00th=[ 392], 50.00th=[ 449], 60.00th=[ 545], 00:21:26.822 | 70.00th=[ 611], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:26.822 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:21:26.822 | 99.99th=[45876] 00:21:26.822 bw ( KiB/s): min= 103, max= 1128, per=4.06%, avg=311.83, stdev=400.84, samples=6 00:21:26.822 iops : min= 25, max= 282, avg=77.83, stdev=100.29, samples=6 00:21:26.822 lat (usec) : 500=56.16%, 750=18.02%, 1000=0.90% 00:21:26.822 lat (msec) : 50=24.62% 00:21:26.822 cpu : usr=0.14%, sys=0.08%, ctx=336, majf=0, minf=1 00:21:26.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.822 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.822 issued rwts: total=333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:26.822 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=274006: Thu Jul 11 23:34:47 2024 00:21:26.822 read: IOPS=667, BW=2668KiB/s (2732kB/s)(10.1MiB/3861msec) 00:21:26.822 slat (usec): min=5, max=28683, avg=39.44, stdev=719.73 00:21:26.822 clat (usec): min=296, max=41994, avg=1452.78, stdev=6516.74 00:21:26.822 lat (usec): min=305, max=47120, avg=1492.23, stdev=6566.89 00:21:26.822 clat percentiles (usec): 00:21:26.822 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:21:26.822 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 379], 00:21:26.822 | 70.00th=[ 408], 80.00th=[ 441], 90.00th=[ 502], 95.00th=[ 570], 00:21:26.822 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:21:26.822 | 99.99th=[42206] 00:21:26.822 bw ( KiB/s): min= 96, max= 6900, per=25.93%, avg=1985.00, stdev=2616.42, samples=7 00:21:26.822 iops : min= 24, max= 1725, avg=496.14, stdev=654.10, samples=7 00:21:26.822 lat (usec) : 500=89.67%, 750=7.53%, 1000=0.08% 00:21:26.822 lat (msec) : 2=0.04%, 50=2.64% 00:21:26.822 cpu : usr=0.52%, sys=0.78%, ctx=2581, majf=0, minf=1 00:21:26.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.822 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.822 issued rwts: total=2576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:26.822 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=274013: Thu Jul 11 23:34:47 2024 00:21:26.822 read: IOPS=349, BW=1398KiB/s (1431kB/s)(4564KiB/3265msec) 00:21:26.822 slat (nsec): min=5702, max=42515, avg=13870.49, stdev=5385.50 00:21:26.822 clat (usec): min=301, max=41968, avg=2824.23, stdev=9544.40 00:21:26.822 lat (usec): min=307, max=41983, avg=2838.10, stdev=9545.57 00:21:26.822 clat percentiles (usec): 00:21:26.822 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 363], 00:21:26.822 | 30.00th=[ 392], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 469], 00:21:26.822 | 70.00th=[ 490], 80.00th=[ 519], 90.00th=[ 586], 95.00th=[41157], 00:21:26.822 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:21:26.822 | 99.99th=[42206] 00:21:26.822 bw ( KiB/s): min= 96, max= 3752, per=14.16%, avg=1084.67, stdev=1472.91, samples=6 00:21:26.822 iops : min= 24, max= 938, avg=271.17, stdev=368.23, samples=6 00:21:26.822 lat (usec) : 500=74.26%, 750=19.44%, 1000=0.35% 00:21:26.822 lat (msec) : 50=5.87% 00:21:26.822 cpu : usr=0.06%, sys=0.67%, ctx=1144, majf=0, minf=1 00:21:26.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.822 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.822 issued rwts: total=1142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:26.822 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=274014: Thu Jul 11 23:34:47 2024 00:21:26.823 read: IOPS=1144, BW=4575KiB/s (4685kB/s)(13.1MiB/2922msec) 00:21:26.823 slat (nsec): min=4907, max=47443, avg=10420.18, stdev=5607.23 00:21:26.823 clat (usec): min=285, max=41489, avg=854.40, stdev=4534.60 00:21:26.823 lat (usec): min=290, max=41505, avg=864.82, stdev=4535.54 00:21:26.823 clat percentiles (usec): 00:21:26.823 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:21:26.823 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:21:26.823 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 392], 95.00th=[ 474], 00:21:26.823 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:21:26.823 | 99.99th=[41681] 00:21:26.823 bw ( KiB/s): min= 95, max=11280, per=69.53%, avg=5323.00, stdev=5301.83, samples=5 00:21:26.823 iops : min= 23, max= 2820, avg=1330.60, stdev=1325.64, samples=5 00:21:26.823 lat (usec) : 500=95.90%, 750=2.78%, 1000=0.03% 00:21:26.823 lat (msec) : 50=1.26% 00:21:26.823 cpu : usr=0.38%, sys=1.47%, ctx=3344, majf=0, minf=1 00:21:26.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.823 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.823 issued rwts: total=3343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:26.823 00:21:26.823 Run status group 0 (all jobs): 00:21:26.823 READ: bw=7656KiB/s (7840kB/s), 373KiB/s-4575KiB/s (382kB/s-4685kB/s), io=28.9MiB (30.3MB), run=2922-3861msec 00:21:26.823 00:21:26.823 Disk stats (read/write): 00:21:26.823 nvme0n1: ios=325/0, merge=0/0, ticks=3281/0, in_queue=3281, util=94.74% 00:21:26.823 nvme0n2: ios=2567/0, merge=0/0, ticks=3718/0, in_queue=3718, util=94.98% 00:21:26.823 nvme0n3: ios=1022/0, merge=0/0, ticks=3946/0, in_queue=3946, util=99.94% 00:21:26.823 nvme0n4: 
ios=3339/0, merge=0/0, ticks=2743/0, in_queue=2743, util=96.71% 00:21:27.081 23:34:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:27.081 23:34:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:27.338 23:34:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:27.338 23:34:48 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:27.901 23:34:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:27.901 23:34:48 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:28.159 23:34:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:28.159 23:34:48 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:28.416 23:34:49 -- target/fio.sh@69 -- # fio_status=0 00:21:28.416 23:34:49 -- target/fio.sh@70 -- # wait 273791 00:21:28.416 23:34:49 -- target/fio.sh@70 -- # fio_status=4 00:21:28.416 23:34:49 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:28.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:28.673 23:34:49 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:28.673 23:34:49 -- common/autotest_common.sh@1198 -- # local i=0 00:21:28.673 23:34:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:28.673 23:34:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:28.673 23:34:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:28.673 23:34:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:28.673 23:34:49 -- common/autotest_common.sh@1210 -- # return 0 00:21:28.673 23:34:49 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:28.673 23:34:49 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:28.673 nvmf hotplug test: fio failed as expected 00:21:28.673 23:34:49 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.931 23:34:49 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:28.931 23:34:49 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:28.931 23:34:49 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:28.931 23:34:49 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:28.931 23:34:49 -- target/fio.sh@91 -- # nvmftestfini 00:21:28.931 23:34:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:28.931 23:34:49 -- nvmf/common.sh@116 -- # sync 00:21:28.931 23:34:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:28.931 23:34:49 -- nvmf/common.sh@119 -- # set +e 00:21:28.931 23:34:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:28.931 23:34:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:28.931 rmmod nvme_tcp 00:21:28.931 rmmod nvme_fabrics 00:21:28.931 rmmod nvme_keyring 00:21:28.931 23:34:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:28.931 23:34:49 -- nvmf/common.sh@123 -- # set -e 00:21:28.931 23:34:49 -- nvmf/common.sh@124 -- # return 0 00:21:28.931 23:34:49 -- nvmf/common.sh@477 -- # '[' -n 271570 ']' 00:21:28.931 23:34:49 -- 
nvmf/common.sh@478 -- # killprocess 271570 00:21:28.931 23:34:49 -- common/autotest_common.sh@926 -- # '[' -z 271570 ']' 00:21:28.931 23:34:49 -- common/autotest_common.sh@930 -- # kill -0 271570 00:21:28.931 23:34:49 -- common/autotest_common.sh@931 -- # uname 00:21:28.931 23:34:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:28.931 23:34:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 271570 00:21:28.931 23:34:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:28.931 23:34:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:28.931 23:34:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 271570' 00:21:28.931 killing process with pid 271570 00:21:28.931 23:34:49 -- common/autotest_common.sh@945 -- # kill 271570 00:21:28.931 23:34:49 -- common/autotest_common.sh@950 -- # wait 271570 00:21:29.189 23:34:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:29.189 23:34:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:29.189 23:34:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:29.189 23:34:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.189 23:34:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:29.189 23:34:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.189 23:34:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.189 23:34:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.716 23:34:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:31.716 00:21:31.716 real 0m27.544s 00:21:31.716 user 1m38.430s 00:21:31.716 sys 0m7.287s 00:21:31.716 23:34:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.716 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:21:31.716 ************************************ 00:21:31.716 END TEST nvmf_fio_target 00:21:31.716 ************************************ 00:21:31.716 23:34:52 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:31.716 23:34:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:31.716 23:34:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:31.716 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:21:31.716 ************************************ 00:21:31.716 START TEST nvmf_bdevio 00:21:31.716 ************************************ 00:21:31.716 23:34:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:31.716 * Looking for test storage... 
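
Note on the teardown just completed: nvmf_fio_target finishes with a fixed sequence — disconnect the initiator, delete the subsystem over RPC, remove the fio verify-state files, unload the NVMe-oF kernel modules, kill the target process, and flush the initiator-side address. A condensed, untested sketch of that sequence as captured in this log (the workspace path, NQN, PID 271570, and interface name cvl_0_1 are all specific to this run):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    nvmfpid=271570    # nvmf_tgt PID from this run
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rootdir/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
    sync
    modprobe -v -r nvme-tcp       # verbose output is the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill $nvmfpid                 # killprocess also verifies the PID belongs to the reactor first
    ip -4 addr flush cvl_0_1      # release the initiator-side test address
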
00:21:31.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.716 23:34:52 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.716 23:34:52 -- nvmf/common.sh@7 -- # uname -s 00:21:31.716 23:34:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.716 23:34:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.716 23:34:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.716 23:34:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.716 23:34:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.716 23:34:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.716 23:34:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.716 23:34:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.716 23:34:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.716 23:34:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.716 23:34:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:31.716 23:34:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:31.716 23:34:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.716 23:34:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.716 23:34:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.716 23:34:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.716 23:34:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.716 23:34:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.716 23:34:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.716 23:34:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.716 23:34:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.716 23:34:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.716 23:34:52 -- paths/export.sh@5 -- # export PATH 00:21:31.716 23:34:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.716 23:34:52 -- nvmf/common.sh@46 -- # : 0 00:21:31.716 23:34:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:31.716 23:34:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:31.716 23:34:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:31.716 23:34:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.716 23:34:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.716 23:34:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:31.716 23:34:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:31.716 23:34:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:31.716 23:34:52 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.716 23:34:52 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.716 23:34:52 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:31.716 23:34:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:31.716 23:34:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.716 23:34:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:31.716 23:34:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:31.716 23:34:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:31.716 23:34:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.716 23:34:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.716 23:34:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.717 23:34:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:31.717 23:34:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:31.717 23:34:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:31.717 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:21:34.252 23:34:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:34.252 23:34:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:34.252 23:34:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:34.252 23:34:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:34.252 23:34:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:34.252 23:34:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:34.252 23:34:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:34.252 23:34:54 -- nvmf/common.sh@294 -- # net_devs=() 00:21:34.252 23:34:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:34.252 23:34:54 -- nvmf/common.sh@295 
-- # e810=() 00:21:34.252 23:34:54 -- nvmf/common.sh@295 -- # local -ga e810 00:21:34.252 23:34:54 -- nvmf/common.sh@296 -- # x722=() 00:21:34.252 23:34:54 -- nvmf/common.sh@296 -- # local -ga x722 00:21:34.252 23:34:54 -- nvmf/common.sh@297 -- # mlx=() 00:21:34.252 23:34:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:34.252 23:34:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.252 23:34:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:34.252 23:34:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:34.252 23:34:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:34.252 23:34:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:34.252 23:34:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:34.252 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:34.252 23:34:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:34.252 23:34:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:34.252 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:34.252 23:34:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:34.252 23:34:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:34.252 23:34:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.252 23:34:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:34.252 23:34:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.252 23:34:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:34.252 Found 
net devices under 0000:84:00.0: cvl_0_0 00:21:34.252 23:34:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.252 23:34:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:34.252 23:34:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.252 23:34:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:34.252 23:34:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.252 23:34:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:34.252 Found net devices under 0000:84:00.1: cvl_0_1 00:21:34.252 23:34:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.252 23:34:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:34.252 23:34:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:34.252 23:34:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:34.252 23:34:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:34.252 23:34:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.252 23:34:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.252 23:34:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.252 23:34:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:34.252 23:34:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.252 23:34:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.252 23:34:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:34.252 23:34:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.252 23:34:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.252 23:34:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:34.252 23:34:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:34.252 23:34:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.252 23:34:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.252 23:34:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.252 23:34:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.252 23:34:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:34.252 23:34:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.252 23:34:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.252 23:34:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.252 23:34:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:34.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:21:34.252 00:21:34.252 --- 10.0.0.2 ping statistics --- 00:21:34.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.252 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:34.252 23:34:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:21:34.252 00:21:34.252 --- 10.0.0.1 ping statistics --- 00:21:34.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.252 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:21:34.253 23:34:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.253 23:34:54 -- nvmf/common.sh@410 -- # return 0 00:21:34.253 23:34:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:34.253 23:34:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.253 23:34:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:34.253 23:34:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:34.253 23:34:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.253 23:34:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:34.253 23:34:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:34.253 23:34:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:34.253 23:34:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:34.253 23:34:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:34.253 23:34:54 -- common/autotest_common.sh@10 -- # set +x 00:21:34.253 23:34:54 -- nvmf/common.sh@469 -- # nvmfpid=276713 00:21:34.253 23:34:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:34.253 23:34:54 -- nvmf/common.sh@470 -- # waitforlisten 276713 00:21:34.253 23:34:54 -- common/autotest_common.sh@819 -- # '[' -z 276713 ']' 00:21:34.253 23:34:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.253 23:34:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:34.253 23:34:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.253 23:34:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:34.253 23:34:54 -- common/autotest_common.sh@10 -- # set +x 00:21:34.253 [2024-07-11 23:34:55.045806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:34.253 [2024-07-11 23:34:55.045915] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.253 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.253 [2024-07-11 23:34:55.159627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.511 [2024-07-11 23:34:55.305964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:34.511 [2024-07-11 23:34:55.306130] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.511 [2024-07-11 23:34:55.306158] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.511 [2024-07-11 23:34:55.306174] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
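
Note on the setup traced above: nvmf_tcp_init wires the two e810 ports into a point-to-point topology on one host — the target port (cvl_0_0) is moved into its own network namespace while the initiator port (cvl_0_1) stays in the root namespace, which is why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk. A condensed sketch of the commands exactly as they appear in this log (port names and the 10.0.0.0/24 addressing are specific to this run):

    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add $NVMF_TARGET_NAMESPACE
    ip link set cvl_0_0 netns $NVMF_TARGET_NAMESPACE        # target port joins the namespace
    ip addr add $NVMF_INITIATOR_IP/24 dev cvl_0_1           # initiator port stays in the root ns
    ip netns exec $NVMF_TARGET_NAMESPACE ip addr add $NVMF_FIRST_TARGET_IP/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set cvl_0_0 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 $NVMF_FIRST_TARGET_IP                         # both directions verified above: 0% loss
    ip netns exec $NVMF_TARGET_NAMESPACE ping -c 1 $NVMF_INITIATOR_IP
    modprobe nvme-tcp
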
00:21:34.511 [2024-07-11 23:34:55.306258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:34.511 [2024-07-11 23:34:55.306295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:34.511 [2024-07-11 23:34:55.306344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:34.511 [2024-07-11 23:34:55.306347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.446 23:34:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:35.446 23:34:56 -- common/autotest_common.sh@852 -- # return 0 00:21:35.446 23:34:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:35.446 23:34:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:35.446 23:34:56 -- common/autotest_common.sh@10 -- # set +x 00:21:35.446 23:34:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.446 23:34:56 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.446 23:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.446 23:34:56 -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 [2024-07-11 23:34:56.400451] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.705 23:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.705 23:34:56 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:35.705 23:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.705 23:34:56 -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 Malloc0 00:21:35.705 23:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.705 23:34:56 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.705 23:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.705 23:34:56 -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 23:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.705 23:34:56 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.705 23:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.705 23:34:56 -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 23:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.706 23:34:56 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.706 23:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.706 23:34:56 -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 [2024-07-11 23:34:56.483909] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.706 23:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.706 23:34:56 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:35.706 23:34:56 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:35.706 23:34:56 -- nvmf/common.sh@520 -- # config=() 00:21:35.706 23:34:56 -- nvmf/common.sh@520 -- # local subsystem config 00:21:35.706 23:34:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:35.706 23:34:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:35.706 { 00:21:35.706 "params": { 00:21:35.706 "name": "Nvme$subsystem", 00:21:35.706 "trtype": "$TEST_TRANSPORT", 00:21:35.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.706 "adrfam": "ipv4", 00:21:35.706 "trsvcid": 
"$NVMF_PORT", 00:21:35.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.706 "hdgst": ${hdgst:-false}, 00:21:35.706 "ddgst": ${ddgst:-false} 00:21:35.706 }, 00:21:35.706 "method": "bdev_nvme_attach_controller" 00:21:35.706 } 00:21:35.706 EOF 00:21:35.706 )") 00:21:35.706 23:34:56 -- nvmf/common.sh@542 -- # cat 00:21:35.706 23:34:56 -- nvmf/common.sh@544 -- # jq . 00:21:35.706 23:34:56 -- nvmf/common.sh@545 -- # IFS=, 00:21:35.706 23:34:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:35.706 "params": { 00:21:35.706 "name": "Nvme1", 00:21:35.706 "trtype": "tcp", 00:21:35.706 "traddr": "10.0.0.2", 00:21:35.706 "adrfam": "ipv4", 00:21:35.706 "trsvcid": "4420", 00:21:35.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.706 "hdgst": false, 00:21:35.706 "ddgst": false 00:21:35.706 }, 00:21:35.706 "method": "bdev_nvme_attach_controller" 00:21:35.706 }' 00:21:35.706 [2024-07-11 23:34:56.544844] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:35.706 [2024-07-11 23:34:56.544942] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276971 ] 00:21:35.706 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.706 [2024-07-11 23:34:56.622417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:35.965 [2024-07-11 23:34:56.718176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.966 [2024-07-11 23:34:56.718207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.966 [2024-07-11 23:34:56.718210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.224 [2024-07-11 23:34:57.058384] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:36.224 [2024-07-11 23:34:57.058431] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:36.224 I/O targets: 00:21:36.224 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:36.224 00:21:36.224 00:21:36.224 CUnit - A unit testing framework for C - Version 2.1-3 00:21:36.224 http://cunit.sourceforge.net/ 00:21:36.224 00:21:36.224 00:21:36.224 Suite: bdevio tests on: Nvme1n1 00:21:36.224 Test: blockdev write read block ...passed 00:21:36.224 Test: blockdev write zeroes read block ...passed 00:21:36.224 Test: blockdev write zeroes read no split ...passed 00:21:36.482 Test: blockdev write zeroes read split ...passed 00:21:36.482 Test: blockdev write zeroes read split partial ...passed 00:21:36.482 Test: blockdev reset ...[2024-07-11 23:34:57.266049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.482 [2024-07-11 23:34:57.266163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d58740 (9): Bad file descriptor 00:21:36.482 [2024-07-11 23:34:57.328361] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:36.482 passed 00:21:36.482 Test: blockdev write read 8 blocks ...passed 00:21:36.482 Test: blockdev write read size > 128k ...passed 00:21:36.483 Test: blockdev write read invalid size ...passed 00:21:36.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:36.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:36.483 Test: blockdev write read max offset ...passed 00:21:36.742 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:36.742 Test: blockdev writev readv 8 blocks ...passed 00:21:36.742 Test: blockdev writev readv 30 x 1block ...passed 00:21:36.742 Test: blockdev writev readv block ...passed 00:21:36.742 Test: blockdev writev readv size > 128k ...passed 00:21:36.742 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:36.742 Test: blockdev comparev and writev ...[2024-07-11 23:34:57.542788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.542825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.542848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.542865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.543303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.543328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.543349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.543365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.543787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.543810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.543830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.543846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.544262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.544286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.544306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:36.742 [2024-07-11 23:34:57.544321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:36.742 passed 00:21:36.742 Test: blockdev nvme passthru rw ...passed 00:21:36.742 Test: blockdev nvme passthru vendor specific ...[2024-07-11 23:34:57.626550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.742 [2024-07-11 23:34:57.626576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.626808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.742 [2024-07-11 23:34:57.626831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.627026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.742 [2024-07-11 23:34:57.627049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:36.742 [2024-07-11 23:34:57.627269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.742 [2024-07-11 23:34:57.627292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:36.742 passed 00:21:36.742 Test: blockdev nvme admin passthru ...passed 00:21:36.742 Test: blockdev copy ...passed 00:21:36.742 00:21:36.742 Run Summary: Type Total Ran Passed Failed Inactive 00:21:36.742 suites 1 1 n/a 0 0 00:21:36.742 tests 23 23 23 0 0 00:21:36.742 asserts 152 152 152 0 n/a 00:21:36.742 00:21:36.742 Elapsed time = 1.262 seconds 00:21:37.001 23:34:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:37.001 23:34:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:37.001 23:34:57 -- common/autotest_common.sh@10 -- # set +x 00:21:37.001 23:34:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.001 23:34:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:37.001 23:34:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:37.001 23:34:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.001 23:34:57 -- nvmf/common.sh@116 -- # sync 00:21:37.001 23:34:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.001 23:34:57 -- nvmf/common.sh@119 -- # set +e 00:21:37.001 23:34:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.001 23:34:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.001 rmmod nvme_tcp 00:21:37.001 rmmod nvme_fabrics 00:21:37.001 rmmod nvme_keyring 00:21:37.261 23:34:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.261 23:34:57 -- nvmf/common.sh@123 -- # set -e 00:21:37.261 23:34:57 -- nvmf/common.sh@124 -- # return 0 00:21:37.261 23:34:57 -- nvmf/common.sh@477 -- # '[' -n 276713 ']' 00:21:37.261 23:34:57 -- nvmf/common.sh@478 -- # killprocess 276713 00:21:37.261 23:34:57 -- common/autotest_common.sh@926 -- # '[' -z 276713 ']' 00:21:37.261 23:34:57 -- common/autotest_common.sh@930 -- # kill -0 276713 00:21:37.261 23:34:57 -- common/autotest_common.sh@931 -- # uname 00:21:37.261 23:34:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.261 23:34:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 276713 00:21:37.261 23:34:58 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:37.261 23:34:58 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:37.261 23:34:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 276713' 00:21:37.261 killing process with pid 276713 00:21:37.261 23:34:58 -- common/autotest_common.sh@945 -- # kill 276713 00:21:37.261 23:34:58 -- common/autotest_common.sh@950 -- # wait 276713 00:21:37.520 23:34:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.520 23:34:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.520 23:34:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.520 23:34:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.520 23:34:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.520 23:34:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.520 23:34:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.520 23:34:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.059 23:35:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:40.059 00:21:40.059 real 0m8.297s 00:21:40.059 user 0m16.106s 00:21:40.059 sys 0m2.852s 00:21:40.059 23:35:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.059 23:35:00 -- common/autotest_common.sh@10 -- # set +x 00:21:40.059 ************************************ 00:21:40.060 END TEST nvmf_bdevio 00:21:40.060 ************************************ 00:21:40.060 23:35:00 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:40.060 23:35:00 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:40.060 23:35:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:40.060 23:35:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:40.060 23:35:00 -- common/autotest_common.sh@10 -- # set +x 00:21:40.060 ************************************ 00:21:40.060 START TEST nvmf_bdevio_no_huge 00:21:40.060 ************************************ 00:21:40.060 23:35:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:40.060 * Looking for test storage... 
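The teardown just traced (nvmftestfini -> nvmfcleanup -> killprocess) is the pattern every suite in this log ends with: sync, retry unloading the kernel nvme initiator modules, then kill the nvmf_tgt pid and wait for it, refusing to signal anything that resolves to a sudo wrapper. A minimal sketch of that flow, with the helper bodies inferred from the '-- #' xtrace lines above rather than copied from nvmf/common.sh:

# Sketch only: control flow inferred from the xtrace above, not the script.
nvmf_teardown_sketch() {
    local pid=$1
    sync
    set +e
    # The harness loops up to 20 times unloading the initiator modules.
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        # ps resolves the pid's comm (reactor_3 above); only 'sudo' is spared.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        fi
    fi
}
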
00:21:40.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.060 23:35:00 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.060 23:35:00 -- nvmf/common.sh@7 -- # uname -s 00:21:40.060 23:35:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.060 23:35:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.060 23:35:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.060 23:35:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.060 23:35:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.060 23:35:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.060 23:35:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.060 23:35:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.060 23:35:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.060 23:35:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.060 23:35:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:40.060 23:35:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:40.060 23:35:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.060 23:35:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.060 23:35:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.060 23:35:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.060 23:35:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.060 23:35:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.060 23:35:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.060 23:35:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.060 23:35:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.060 23:35:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.060 23:35:00 -- paths/export.sh@5 -- # export PATH 00:21:40.060 23:35:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.060 23:35:00 -- nvmf/common.sh@46 -- # : 0 00:21:40.060 23:35:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:40.060 23:35:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:40.060 23:35:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:40.060 23:35:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.060 23:35:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.060 23:35:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:40.060 23:35:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:40.060 23:35:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:40.060 23:35:00 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.060 23:35:00 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.060 23:35:00 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:40.060 23:35:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:40.060 23:35:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.060 23:35:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:40.060 23:35:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:40.060 23:35:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:40.060 23:35:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.060 23:35:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.060 23:35:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.060 23:35:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:40.060 23:35:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:40.060 23:35:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:40.060 23:35:00 -- common/autotest_common.sh@10 -- # set +x 00:21:42.595 23:35:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:42.595 23:35:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:42.595 23:35:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:42.595 23:35:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:42.595 23:35:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:42.595 23:35:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:42.595 23:35:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:42.595 23:35:03 -- nvmf/common.sh@294 -- # net_devs=() 00:21:42.595 23:35:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:42.595 23:35:03 -- nvmf/common.sh@295 
-- # e810=() 00:21:42.595 23:35:03 -- nvmf/common.sh@295 -- # local -ga e810 00:21:42.595 23:35:03 -- nvmf/common.sh@296 -- # x722=() 00:21:42.595 23:35:03 -- nvmf/common.sh@296 -- # local -ga x722 00:21:42.595 23:35:03 -- nvmf/common.sh@297 -- # mlx=() 00:21:42.595 23:35:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:42.595 23:35:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.595 23:35:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:42.595 23:35:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:42.595 23:35:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:42.595 23:35:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:42.595 23:35:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:42.595 23:35:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:42.595 23:35:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:42.595 23:35:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:42.596 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:42.596 23:35:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:42.596 23:35:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:42.596 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:42.596 23:35:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:42.596 23:35:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:42.596 23:35:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.596 23:35:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:42.596 23:35:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.596 23:35:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:42.596 Found 
net devices under 0000:84:00.0: cvl_0_0 00:21:42.596 23:35:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.596 23:35:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:42.596 23:35:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.596 23:35:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:42.596 23:35:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.596 23:35:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:42.596 Found net devices under 0000:84:00.1: cvl_0_1 00:21:42.596 23:35:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.596 23:35:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:42.596 23:35:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:42.596 23:35:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:42.596 23:35:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.596 23:35:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.596 23:35:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.596 23:35:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:42.596 23:35:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.596 23:35:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.596 23:35:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:42.596 23:35:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.596 23:35:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.596 23:35:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:42.596 23:35:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:42.596 23:35:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.596 23:35:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.596 23:35:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.596 23:35:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.596 23:35:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:42.596 23:35:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.596 23:35:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.596 23:35:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.596 23:35:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:42.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:21:42.596 00:21:42.596 --- 10.0.0.2 ping statistics --- 00:21:42.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.596 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:21:42.596 23:35:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:21:42.596 00:21:42.596 --- 10.0.0.1 ping statistics --- 00:21:42.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.596 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:42.596 23:35:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.596 23:35:03 -- nvmf/common.sh@410 -- # return 0 00:21:42.596 23:35:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:42.596 23:35:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.596 23:35:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:42.596 23:35:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.596 23:35:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:42.596 23:35:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:42.596 23:35:03 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:42.596 23:35:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:42.596 23:35:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:42.596 23:35:03 -- common/autotest_common.sh@10 -- # set +x 00:21:42.596 23:35:03 -- nvmf/common.sh@469 -- # nvmfpid=279204 00:21:42.596 23:35:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:42.596 23:35:03 -- nvmf/common.sh@470 -- # waitforlisten 279204 00:21:42.596 23:35:03 -- common/autotest_common.sh@819 -- # '[' -z 279204 ']' 00:21:42.596 23:35:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.596 23:35:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:42.596 23:35:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.596 23:35:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:42.596 23:35:03 -- common/autotest_common.sh@10 -- # set +x 00:21:42.596 [2024-07-11 23:35:03.429040] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:42.596 [2024-07-11 23:35:03.429133] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:42.596 [2024-07-11 23:35:03.532483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.854 [2024-07-11 23:35:03.686309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:42.854 [2024-07-11 23:35:03.686519] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.854 [2024-07-11 23:35:03.686564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.854 [2024-07-11 23:35:03.686594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
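The nvmf_tcp_init sequence traced at 23:35:03 is what makes these phy runs self-contained: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with TCP port 4420 opened and a one-packet ping in each direction as a sanity check. Condensed from the traced commands (interface names are specific to this rig; substitute your own NIC ports):

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> initiator

The target itself then runs inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt command line above), so only NVMe/TCP traffic crosses between the two ports.
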
00:21:42.854 [2024-07-11 23:35:03.687006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:42.854 [2024-07-11 23:35:03.687103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:42.854 [2024-07-11 23:35:03.687194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:42.854 [2024-07-11 23:35:03.687201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.793 23:35:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.793 23:35:04 -- common/autotest_common.sh@852 -- # return 0 00:21:43.793 23:35:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:43.793 23:35:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:43.794 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:43.794 23:35:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.794 23:35:04 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.794 23:35:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.794 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:43.794 [2024-07-11 23:35:04.682859] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.794 23:35:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.794 23:35:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:43.794 23:35:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.794 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:43.794 Malloc0 00:21:43.794 23:35:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.794 23:35:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.794 23:35:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.794 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:43.794 23:35:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.794 23:35:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.794 23:35:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.794 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:43.794 23:35:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.794 23:35:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.794 23:35:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.794 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:43.794 [2024-07-11 23:35:04.740367] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.054 23:35:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.054 23:35:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:44.054 23:35:04 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:44.054 23:35:04 -- nvmf/common.sh@520 -- # config=() 00:21:44.054 23:35:04 -- nvmf/common.sh@520 -- # local subsystem config 00:21:44.054 23:35:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:44.054 23:35:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:44.054 { 00:21:44.054 "params": { 00:21:44.054 "name": "Nvme$subsystem", 00:21:44.054 "trtype": "$TEST_TRANSPORT", 00:21:44.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.054 "adrfam": "ipv4", 00:21:44.054 
"trsvcid": "$NVMF_PORT", 00:21:44.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.054 "hdgst": ${hdgst:-false}, 00:21:44.054 "ddgst": ${ddgst:-false} 00:21:44.054 }, 00:21:44.054 "method": "bdev_nvme_attach_controller" 00:21:44.054 } 00:21:44.054 EOF 00:21:44.054 )") 00:21:44.054 23:35:04 -- nvmf/common.sh@542 -- # cat 00:21:44.054 23:35:04 -- nvmf/common.sh@544 -- # jq . 00:21:44.054 23:35:04 -- nvmf/common.sh@545 -- # IFS=, 00:21:44.054 23:35:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:44.054 "params": { 00:21:44.054 "name": "Nvme1", 00:21:44.054 "trtype": "tcp", 00:21:44.054 "traddr": "10.0.0.2", 00:21:44.054 "adrfam": "ipv4", 00:21:44.054 "trsvcid": "4420", 00:21:44.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.054 "hdgst": false, 00:21:44.054 "ddgst": false 00:21:44.054 }, 00:21:44.054 "method": "bdev_nvme_attach_controller" 00:21:44.054 }' 00:21:44.054 [2024-07-11 23:35:04.788995] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:44.054 [2024-07-11 23:35:04.789088] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid279365 ] 00:21:44.054 [2024-07-11 23:35:04.861099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:44.054 [2024-07-11 23:35:04.953222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.054 [2024-07-11 23:35:04.953277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.054 [2024-07-11 23:35:04.953281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.313 [2024-07-11 23:35:05.148625] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:44.313 [2024-07-11 23:35:05.148684] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:44.313 I/O targets: 00:21:44.313 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:44.313 00:21:44.313 00:21:44.313 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.313 http://cunit.sourceforge.net/ 00:21:44.313 00:21:44.313 00:21:44.313 Suite: bdevio tests on: Nvme1n1 00:21:44.313 Test: blockdev write read block ...passed 00:21:44.313 Test: blockdev write zeroes read block ...passed 00:21:44.313 Test: blockdev write zeroes read no split ...passed 00:21:44.571 Test: blockdev write zeroes read split ...passed 00:21:44.571 Test: blockdev write zeroes read split partial ...passed 00:21:44.571 Test: blockdev reset ...[2024-07-11 23:35:05.392801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:44.571 [2024-07-11 23:35:05.392913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613fd0 (9): Bad file descriptor 00:21:44.571 [2024-07-11 23:35:05.448566] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:44.571 passed 00:21:44.571 Test: blockdev write read 8 blocks ...passed 00:21:44.571 Test: blockdev write read size > 128k ...passed 00:21:44.571 Test: blockdev write read invalid size ...passed 00:21:44.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:44.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:44.829 Test: blockdev write read max offset ...passed 00:21:44.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:44.829 Test: blockdev writev readv 8 blocks ...passed 00:21:44.829 Test: blockdev writev readv 30 x 1block ...passed 00:21:44.829 Test: blockdev writev readv block ...passed 00:21:44.829 Test: blockdev writev readv size > 128k ...passed 00:21:44.829 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:44.829 Test: blockdev comparev and writev ...[2024-07-11 23:35:05.710770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.710808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.829 [2024-07-11 23:35:05.710839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.710856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.829 [2024-07-11 23:35:05.711430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.711455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.829 [2024-07-11 23:35:05.711476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.711492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.829 [2024-07-11 23:35:05.712043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.712066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.829 [2024-07-11 23:35:05.712087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.712103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.829 [2024-07-11 23:35:05.712662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.712686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.829 [2024-07-11 23:35:05.712707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:44.829 [2024-07-11 23:35:05.712723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.829 passed 00:21:45.088 Test: blockdev nvme passthru rw ...passed 00:21:45.088 Test: blockdev nvme passthru vendor specific ...[2024-07-11 23:35:05.797581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.088 [2024-07-11 23:35:05.797609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:45.088 [2024-07-11 23:35:05.797882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.088 [2024-07-11 23:35:05.797903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:45.088 [2024-07-11 23:35:05.798236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.088 [2024-07-11 23:35:05.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:45.088 [2024-07-11 23:35:05.798593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.088 [2024-07-11 23:35:05.798615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:45.088 passed 00:21:45.088 Test: blockdev nvme admin passthru ...passed 00:21:45.088 Test: blockdev copy ...passed 00:21:45.088 00:21:45.088 Run Summary: Type Total Ran Passed Failed Inactive 00:21:45.088 suites 1 1 n/a 0 0 00:21:45.088 tests 23 23 23 0 0 00:21:45.089 asserts 152 152 152 0 n/a 00:21:45.089 00:21:45.089 Elapsed time = 1.412 seconds 00:21:45.353 23:35:06 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.353 23:35:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.353 23:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:45.353 23:35:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.353 23:35:06 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:45.353 23:35:06 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:45.353 23:35:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:45.353 23:35:06 -- nvmf/common.sh@116 -- # sync 00:21:45.353 23:35:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:45.353 23:35:06 -- nvmf/common.sh@119 -- # set +e 00:21:45.353 23:35:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:45.353 23:35:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:45.353 rmmod nvme_tcp 00:21:45.353 rmmod nvme_fabrics 00:21:45.353 rmmod nvme_keyring 00:21:45.353 23:35:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:45.353 23:35:06 -- nvmf/common.sh@123 -- # set -e 00:21:45.353 23:35:06 -- nvmf/common.sh@124 -- # return 0 00:21:45.353 23:35:06 -- nvmf/common.sh@477 -- # '[' -n 279204 ']' 00:21:45.353 23:35:06 -- nvmf/common.sh@478 -- # killprocess 279204 00:21:45.353 23:35:06 -- common/autotest_common.sh@926 -- # '[' -z 279204 ']' 00:21:45.353 23:35:06 -- common/autotest_common.sh@930 -- # kill -0 279204 00:21:45.353 23:35:06 -- common/autotest_common.sh@931 -- # uname 00:21:45.353 23:35:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:45.353 23:35:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 279204 00:21:45.659 23:35:06 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:45.659 23:35:06 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:45.659 23:35:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 279204' 00:21:45.659 killing process with pid 279204 00:21:45.659 23:35:06 -- common/autotest_common.sh@945 -- # kill 279204 00:21:45.659 23:35:06 -- common/autotest_common.sh@950 -- # wait 279204 00:21:46.227 23:35:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:46.227 23:35:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:46.227 23:35:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:46.227 23:35:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.227 23:35:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:46.227 23:35:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.227 23:35:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.227 23:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.132 23:35:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:48.132 00:21:48.132 real 0m8.511s 00:21:48.132 user 0m15.780s 00:21:48.132 sys 0m3.508s 00:21:48.132 23:35:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.132 23:35:08 -- common/autotest_common.sh@10 -- # set +x 00:21:48.132 ************************************ 00:21:48.132 END TEST nvmf_bdevio_no_huge 00:21:48.132 ************************************ 00:21:48.132 23:35:09 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:48.132 23:35:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:48.132 23:35:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:48.132 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:21:48.132 ************************************ 00:21:48.132 START TEST nvmf_tls 00:21:48.132 ************************************ 00:21:48.132 23:35:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:48.392 * Looking for test storage... 
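Both bdevio invocations in this log read their controller description from /dev/fd/62, which is bash process substitution wrapped around gen_nvmf_target_json in nvmf/common.sh. The traced fragments (config=(), the here-doc with $subsystem placeholders, IFS=, and the final printf/jq) reconstruct to roughly the shape below; the helper body is inferred from the xtrace, not copied from the script, and TEST_TRANSPORT/NVMF_* come from the test harness environment:

# Inferred reconstruction: one bdev_nvme_attach_controller params block per
# subsystem id, comma-joined and validated/pretty-printed with jq.
gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}

# Matching the runs above:
# TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 \
#   bdevio --json <(gen_nvmf_target_json_sketch 1)
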
00:21:48.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.392 23:35:09 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.392 23:35:09 -- nvmf/common.sh@7 -- # uname -s 00:21:48.392 23:35:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.392 23:35:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.392 23:35:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.392 23:35:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.392 23:35:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.392 23:35:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.392 23:35:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.392 23:35:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.392 23:35:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.392 23:35:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.392 23:35:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:48.392 23:35:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:48.392 23:35:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.392 23:35:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.392 23:35:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.392 23:35:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.392 23:35:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.392 23:35:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.392 23:35:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.392 23:35:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.392 23:35:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.392 23:35:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.392 23:35:09 -- paths/export.sh@5 -- # export PATH 00:21:48.392 23:35:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.392 23:35:09 -- nvmf/common.sh@46 -- # : 0 00:21:48.392 23:35:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:48.392 23:35:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:48.392 23:35:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:48.392 23:35:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.392 23:35:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.392 23:35:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:48.392 23:35:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:48.392 23:35:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:48.392 23:35:09 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.392 23:35:09 -- target/tls.sh@71 -- # nvmftestinit 00:21:48.392 23:35:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:48.392 23:35:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.392 23:35:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:48.392 23:35:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:48.392 23:35:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:48.392 23:35:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.392 23:35:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.392 23:35:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.392 23:35:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:48.392 23:35:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:48.392 23:35:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:48.392 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:21:50.928 23:35:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:50.928 23:35:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:50.928 23:35:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:50.928 23:35:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:50.928 23:35:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:50.928 23:35:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:50.928 23:35:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:50.928 23:35:11 -- nvmf/common.sh@294 -- # net_devs=() 00:21:50.928 23:35:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:50.928 23:35:11 -- nvmf/common.sh@295 -- # e810=() 00:21:50.928 
23:35:11 -- nvmf/common.sh@295 -- # local -ga e810 00:21:50.928 23:35:11 -- nvmf/common.sh@296 -- # x722=() 00:21:50.928 23:35:11 -- nvmf/common.sh@296 -- # local -ga x722 00:21:50.928 23:35:11 -- nvmf/common.sh@297 -- # mlx=() 00:21:50.928 23:35:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:50.928 23:35:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.928 23:35:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:50.928 23:35:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:50.928 23:35:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:50.928 23:35:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:50.928 23:35:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:50.928 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:50.928 23:35:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:50.928 23:35:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:50.928 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:50.928 23:35:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:50.928 23:35:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:50.928 23:35:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.928 23:35:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:50.928 23:35:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.928 23:35:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:50.928 Found net devices under 
0000:84:00.0: cvl_0_0 00:21:50.928 23:35:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.928 23:35:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:50.928 23:35:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.928 23:35:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:50.928 23:35:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.928 23:35:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:50.928 Found net devices under 0000:84:00.1: cvl_0_1 00:21:50.928 23:35:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.928 23:35:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:50.928 23:35:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:50.928 23:35:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:50.928 23:35:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:50.928 23:35:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.928 23:35:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.928 23:35:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.928 23:35:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:50.928 23:35:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.928 23:35:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.928 23:35:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:50.928 23:35:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.928 23:35:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.928 23:35:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:50.928 23:35:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:50.928 23:35:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.928 23:35:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.928 23:35:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.928 23:35:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.928 23:35:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:50.928 23:35:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.188 23:35:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.188 23:35:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.188 23:35:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:51.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:21:51.188 00:21:51.188 --- 10.0.0.2 ping statistics --- 00:21:51.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.188 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:51.188 23:35:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:51.188 00:21:51.188 --- 10.0.0.1 ping statistics --- 00:21:51.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.188 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:51.188 23:35:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.188 23:35:11 -- nvmf/common.sh@410 -- # return 0 00:21:51.188 23:35:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:51.188 23:35:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.188 23:35:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:51.188 23:35:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:51.188 23:35:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.188 23:35:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:51.188 23:35:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:51.188 23:35:11 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:51.188 23:35:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:51.188 23:35:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:51.188 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.188 23:35:11 -- nvmf/common.sh@469 -- # nvmfpid=281596 00:21:51.188 23:35:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:51.188 23:35:11 -- nvmf/common.sh@470 -- # waitforlisten 281596 00:21:51.188 23:35:11 -- common/autotest_common.sh@819 -- # '[' -z 281596 ']' 00:21:51.188 23:35:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.188 23:35:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:51.188 23:35:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.188 23:35:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:51.188 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.189 [2024-07-11 23:35:12.055781] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:51.189 [2024-07-11 23:35:12.055956] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.189 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.447 [2024-07-11 23:35:12.175091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.447 [2024-07-11 23:35:12.282834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:51.447 [2024-07-11 23:35:12.283032] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.447 [2024-07-11 23:35:12.283057] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.447 [2024-07-11 23:35:12.283075] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
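Unlike the bdevio suites, this target is launched with --wait-for-rpc (visible in the nvmf_tgt command line above): the app comes up with its framework paused so the ssl socket implementation can be configured before any listener exists. The RPC traffic traced below condenses to the sequence sketched here (rpc.py path shortened; the asserts mirror the [[ ... != ... ]] round-trip checks in tls.sh):

rpc_py=./scripts/rpc.py   # stands in for the full scripts/rpc.py path

$rpc_py sock_set_default_impl -i ssl

# Set the TLS version, then read it back and assert the round trip.
$rpc_py sock_impl_set_options -i ssl --tls-version 13
version=$($rpc_py sock_impl_get_options -i ssl | jq -r .tls_version)
[[ $version == 13 ]] || exit 1

# Same write-then-verify dance for kernel TLS offload.
$rpc_py sock_impl_set_options -i ssl --enable-ktls
ktls=$($rpc_py sock_impl_get_options -i ssl | jq -r .enable_ktls)
[[ $ktls == true ]] || exit 1
$rpc_py sock_impl_set_options -i ssl --disable-ktls

# Only now is the target allowed to finish starting up.
$rpc_py framework_start_init
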
00:21:51.447 [2024-07-11 23:35:12.283113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.447 23:35:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:51.447 23:35:12 -- common/autotest_common.sh@852 -- # return 0 00:21:51.447 23:35:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:51.447 23:35:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:51.447 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:21:51.447 23:35:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.447 23:35:12 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:21:51.447 23:35:12 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:52.014 true 00:21:52.014 23:35:12 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:52.014 23:35:12 -- target/tls.sh@82 -- # jq -r .tls_version 00:21:52.278 23:35:13 -- target/tls.sh@82 -- # version=0 00:21:52.278 23:35:13 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:21:52.278 23:35:13 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:52.537 23:35:13 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:52.537 23:35:13 -- target/tls.sh@90 -- # jq -r .tls_version 00:21:52.794 23:35:13 -- target/tls.sh@90 -- # version=13 00:21:52.794 23:35:13 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:21:52.794 23:35:13 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:53.051 23:35:13 -- target/tls.sh@98 -- # jq -r .tls_version 00:21:53.051 23:35:13 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:53.620 23:35:14 -- target/tls.sh@98 -- # version=7 00:21:53.620 23:35:14 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:21:53.620 23:35:14 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:53.620 23:35:14 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:53.878 23:35:14 -- target/tls.sh@105 -- # ktls=false 00:21:53.878 23:35:14 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:21:53.878 23:35:14 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:54.137 23:35:14 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:54.137 23:35:14 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:54.396 23:35:15 -- target/tls.sh@113 -- # ktls=true 00:21:54.396 23:35:15 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:21:54.396 23:35:15 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:54.654 23:35:15 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:54.654 23:35:15 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:21:54.913 23:35:15 -- target/tls.sh@121 -- # ktls=false 00:21:54.913 23:35:15 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:21:54.913 23:35:15 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:21:54.913 23:35:15 -- target/tls.sh@49 -- # local key hash crc 00:21:54.913 23:35:15 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:21:54.913 23:35:15 -- target/tls.sh@51 -- # hash=01 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # gzip -1 -c 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # tail -c8 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # head -c 4 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # crc='p$H�' 00:21:54.913 23:35:15 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:54.913 23:35:15 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:21:54.913 23:35:15 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:54.913 23:35:15 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:54.913 23:35:15 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:21:54.913 23:35:15 -- target/tls.sh@49 -- # local key hash crc 00:21:54.913 23:35:15 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:21:54.913 23:35:15 -- target/tls.sh@51 -- # hash=01 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # gzip -1 -c 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # tail -c8 00:21:54.913 23:35:15 -- target/tls.sh@52 -- # head -c 4 00:21:55.171 23:35:15 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:21:55.171 23:35:15 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:55.171 23:35:15 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:21:55.171 23:35:15 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:55.171 23:35:15 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:55.171 23:35:15 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:55.171 23:35:15 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:55.171 23:35:15 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:55.171 23:35:15 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:55.171 23:35:15 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:55.171 23:35:15 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:55.171 23:35:15 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:55.430 23:35:16 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:55.688 23:35:16 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:55.689 23:35:16 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:55.689 23:35:16 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:56.255 [2024-07-11 23:35:17.160838] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
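Note on the format_interchange_psk trace above: the pipeline works because a gzip stream ends with an 8-byte trailer of CRC32 (little-endian) followed by the input length, so gzip -1 -c | tail -c8 | head -c4 yields the raw CRC32 bytes of the key without any dedicated CRC tool. A minimal standalone sketch of the hash-01 derivation, assuming only gzip and coreutils (like the test script itself, it is not binary-safe if the CRC happens to contain a NUL byte):

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)    # raw CRC32 bytes of the key
  echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"      # 01 = interchange-format hash field (SHA-256)
  # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: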
00:21:56.255 23:35:17 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:56.821 23:35:17 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:57.389 [2024-07-11 23:35:18.111492] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.389 [2024-07-11 23:35:18.111801] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.389 23:35:18 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:57.647 malloc0 00:21:57.647 23:35:18 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:57.906 23:35:18 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:58.474 23:35:19 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:58.474 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.453 Initializing NVMe Controllers 00:22:08.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:08.453 Initialization complete. Launching workers. 
00:22:08.453 ======================================================== 00:22:08.453 Latency(us) 00:22:08.453 Device Information : IOPS MiB/s Average min max 00:22:08.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7647.46 29.87 8371.47 1383.42 9452.08 00:22:08.453 ======================================================== 00:22:08.453 Total : 7647.46 29.87 8371.47 1383.42 9452.08 00:22:08.453 00:22:08.453 23:35:29 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:08.453 23:35:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.453 23:35:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:08.453 23:35:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.453 23:35:29 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:08.453 23:35:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.453 23:35:29 -- target/tls.sh@28 -- # bdevperf_pid=283690 00:22:08.453 23:35:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.453 23:35:29 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.453 23:35:29 -- target/tls.sh@31 -- # waitforlisten 283690 /var/tmp/bdevperf.sock 00:22:08.453 23:35:29 -- common/autotest_common.sh@819 -- # '[' -z 283690 ']' 00:22:08.453 23:35:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.453 23:35:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:08.453 23:35:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.453 23:35:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:08.453 23:35:29 -- common/autotest_common.sh@10 -- # set +x 00:22:08.453 [2024-07-11 23:35:29.365389] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:08.453 [2024-07-11 23:35:29.365504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283690 ] 00:22:08.712 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.712 [2024-07-11 23:35:29.444852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.712 [2024-07-11 23:35:29.539396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.649 23:35:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.649 23:35:30 -- common/autotest_common.sh@852 -- # return 0 00:22:09.649 23:35:30 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:09.908 [2024-07-11 23:35:30.619495] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.908 TLSTESTn1 00:22:09.908 23:35:30 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:09.908 Running I/O for 10 seconds... 00:22:22.179 00:22:22.179 Latency(us) 00:22:22.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:22.179 Verification LBA range: start 0x0 length 0x2000 00:22:22.179 TLSTESTn1 : 10.06 1384.44 5.41 0.00 0.00 92239.24 8155.59 164665.27 00:22:22.179 =================================================================================================================== 00:22:22.179 Total : 1384.44 5.41 0.00 0.00 92239.24 8155.59 164665.27 00:22:22.179 0 00:22:22.179 23:35:40 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.179 23:35:40 -- target/tls.sh@45 -- # killprocess 283690 00:22:22.179 23:35:40 -- common/autotest_common.sh@926 -- # '[' -z 283690 ']' 00:22:22.179 23:35:40 -- common/autotest_common.sh@930 -- # kill -0 283690 00:22:22.179 23:35:40 -- common/autotest_common.sh@931 -- # uname 00:22:22.179 23:35:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.179 23:35:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 283690 00:22:22.179 23:35:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:22.179 23:35:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:22.179 23:35:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 283690' 00:22:22.179 killing process with pid 283690 00:22:22.179 23:35:40 -- common/autotest_common.sh@945 -- # kill 283690 00:22:22.179 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.179 00:22:22.179 Latency(us) 00:22:22.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.179 =================================================================================================================== 00:22:22.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.179 23:35:40 -- common/autotest_common.sh@950 -- # wait 283690 00:22:22.179 23:35:41 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:22.179 23:35:41 -- common/autotest_common.sh@640 -- # local es=0 00:22:22.179 23:35:41 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:22.179 23:35:41 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:22.179 23:35:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:22.179 23:35:41 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:22.179 23:35:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:22.179 23:35:41 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:22.179 23:35:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.179 23:35:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:22.179 23:35:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:22.179 23:35:41 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:22:22.179 23:35:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.179 23:35:41 -- target/tls.sh@28 -- # bdevperf_pid=285048 00:22:22.179 23:35:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.179 23:35:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.179 23:35:41 -- target/tls.sh@31 -- # waitforlisten 285048 /var/tmp/bdevperf.sock 00:22:22.179 23:35:41 -- common/autotest_common.sh@819 -- # '[' -z 285048 ']' 00:22:22.179 23:35:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.179 23:35:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:22.179 23:35:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.180 23:35:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:22.180 23:35:41 -- common/autotest_common.sh@10 -- # set +x 00:22:22.180 [2024-07-11 23:35:41.258554] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:22.180 [2024-07-11 23:35:41.258662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285048 ] 00:22:22.180 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.180 [2024-07-11 23:35:41.334235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.180 [2024-07-11 23:35:41.425406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.180 23:35:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:22.180 23:35:41 -- common/autotest_common.sh@852 -- # return 0 00:22:22.180 23:35:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:22.180 [2024-07-11 23:35:42.071439] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.180 [2024-07-11 23:35:42.083414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:22.180 [2024-07-11 23:35:42.084419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2065000 (107): Transport endpoint is not connected 00:22:22.180 [2024-07-11 23:35:42.085408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2065000 (9): Bad file descriptor 00:22:22.180 [2024-07-11 23:35:42.086408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:22.180 [2024-07-11 23:35:42.086442] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:22.180 [2024-07-11 23:35:42.086457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
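The failure above is the intended outcome of the target/tls.sh@155 negative test: the target was provisioned with key1.txt via nvmf_subsystem_add_host, so a client presenting key2.txt cannot complete the TLS handshake; the socket is torn down (errno 107 on read, then EBADF on the retry) and bdev_nvme_attach_controller surfaces the JSON-RPC error dumped below. Side by side, with paths abbreviated (both invocations appear in full elsewhere in this run):

  # succeeds: PSK matches the one registered on the target
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key1.txt
  # fails as traced here: same identity, different key material
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key2.txt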
00:22:22.180 request: 00:22:22.180 { 00:22:22.180 "name": "TLSTEST", 00:22:22.180 "trtype": "tcp", 00:22:22.180 "traddr": "10.0.0.2", 00:22:22.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.180 "adrfam": "ipv4", 00:22:22.180 "trsvcid": "4420", 00:22:22.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.180 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:22:22.180 "method": "bdev_nvme_attach_controller", 00:22:22.180 "req_id": 1 00:22:22.180 } 00:22:22.180 Got JSON-RPC error response 00:22:22.180 response: 00:22:22.180 { 00:22:22.180 "code": -32602, 00:22:22.180 "message": "Invalid parameters" 00:22:22.180 } 00:22:22.180 23:35:42 -- target/tls.sh@36 -- # killprocess 285048 00:22:22.180 23:35:42 -- common/autotest_common.sh@926 -- # '[' -z 285048 ']' 00:22:22.180 23:35:42 -- common/autotest_common.sh@930 -- # kill -0 285048 00:22:22.180 23:35:42 -- common/autotest_common.sh@931 -- # uname 00:22:22.180 23:35:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.180 23:35:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 285048 00:22:22.180 23:35:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:22.180 23:35:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:22.180 23:35:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 285048' 00:22:22.180 killing process with pid 285048 00:22:22.180 23:35:42 -- common/autotest_common.sh@945 -- # kill 285048 00:22:22.180 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.180 00:22:22.180 Latency(us) 00:22:22.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.180 =================================================================================================================== 00:22:22.180 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.180 23:35:42 -- common/autotest_common.sh@950 -- # wait 285048 00:22:22.180 23:35:42 -- target/tls.sh@37 -- # return 1 00:22:22.180 23:35:42 -- common/autotest_common.sh@643 -- # es=1 00:22:22.180 23:35:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:22.180 23:35:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:22.180 23:35:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:22.180 23:35:42 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:22.180 23:35:42 -- common/autotest_common.sh@640 -- # local es=0 00:22:22.180 23:35:42 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:22.180 23:35:42 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:22.180 23:35:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:22.180 23:35:42 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:22.180 23:35:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:22.180 23:35:42 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:22.180 23:35:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.180 23:35:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:22.180 23:35:42 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:22:22.180 23:35:42 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:22.180 23:35:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.180 23:35:42 -- target/tls.sh@28 -- # bdevperf_pid=285192 00:22:22.180 23:35:42 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.180 23:35:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.180 23:35:42 -- target/tls.sh@31 -- # waitforlisten 285192 /var/tmp/bdevperf.sock 00:22:22.180 23:35:42 -- common/autotest_common.sh@819 -- # '[' -z 285192 ']' 00:22:22.180 23:35:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.180 23:35:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:22.180 23:35:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.180 23:35:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:22.180 23:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.180 [2024-07-11 23:35:42.409670] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:22.180 [2024-07-11 23:35:42.409767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285192 ] 00:22:22.180 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.180 [2024-07-11 23:35:42.485087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.180 [2024-07-11 23:35:42.577482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.120 23:35:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:23.120 23:35:43 -- common/autotest_common.sh@852 -- # return 0 00:22:23.120 23:35:43 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:23.120 [2024-07-11 23:35:44.015069] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.120 [2024-07-11 23:35:44.020219] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:23.120 [2024-07-11 23:35:44.020261] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:23.120 [2024-07-11 23:35:44.020302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:23.120 [2024-07-11 23:35:44.020846] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a5000 (107): Transport endpoint is not connected 00:22:23.120 [2024-07-11 23:35:44.021835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x6a5000 (9): Bad file descriptor 00:22:23.120 [2024-07-11 23:35:44.022834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:23.120 [2024-07-11 23:35:44.022854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:23.120 [2024-07-11 23:35:44.022869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:23.120 request: 00:22:23.120 { 00:22:23.120 "name": "TLSTEST", 00:22:23.120 "trtype": "tcp", 00:22:23.120 "traddr": "10.0.0.2", 00:22:23.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:23.120 "adrfam": "ipv4", 00:22:23.120 "trsvcid": "4420", 00:22:23.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.120 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:23.120 "method": "bdev_nvme_attach_controller", 00:22:23.120 "req_id": 1 00:22:23.120 } 00:22:23.120 Got JSON-RPC error response 00:22:23.120 response: 00:22:23.120 { 00:22:23.120 "code": -32602, 00:22:23.120 "message": "Invalid parameters" 00:22:23.120 } 00:22:23.120 23:35:44 -- target/tls.sh@36 -- # killprocess 285192 00:22:23.120 23:35:44 -- common/autotest_common.sh@926 -- # '[' -z 285192 ']' 00:22:23.121 23:35:44 -- common/autotest_common.sh@930 -- # kill -0 285192 00:22:23.121 23:35:44 -- common/autotest_common.sh@931 -- # uname 00:22:23.121 23:35:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.121 23:35:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 285192 00:22:23.379 23:35:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:23.379 23:35:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:23.379 23:35:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 285192' 00:22:23.379 killing process with pid 285192 00:22:23.379 23:35:44 -- common/autotest_common.sh@945 -- # kill 285192 00:22:23.379 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.379 00:22:23.379 Latency(us) 00:22:23.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.379 =================================================================================================================== 00:22:23.379 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:23.379 23:35:44 -- common/autotest_common.sh@950 -- # wait 285192 00:22:23.379 23:35:44 -- target/tls.sh@37 -- # return 1 00:22:23.379 23:35:44 -- common/autotest_common.sh@643 -- # es=1 00:22:23.379 23:35:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:23.379 23:35:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:23.379 23:35:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:23.379 23:35:44 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:23.379 23:35:44 -- common/autotest_common.sh@640 -- # local es=0 00:22:23.379 23:35:44 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:23.379 23:35:44 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:23.379 23:35:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:23.379 23:35:44 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:23.379 23:35:44 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:23.379 23:35:44 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:23.379 23:35:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:23.379 23:35:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:23.379 23:35:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:23.379 23:35:44 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:23.379 23:35:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.379 23:35:44 -- target/tls.sh@28 -- # bdevperf_pid=285470 00:22:23.379 23:35:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.379 23:35:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.379 23:35:44 -- target/tls.sh@31 -- # waitforlisten 285470 /var/tmp/bdevperf.sock 00:22:23.379 23:35:44 -- common/autotest_common.sh@819 -- # '[' -z 285470 ']' 00:22:23.379 23:35:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.379 23:35:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:23.379 23:35:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.379 23:35:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:23.379 23:35:44 -- common/autotest_common.sh@10 -- # set +x 00:22:23.639 [2024-07-11 23:35:44.333924] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:23.639 [2024-07-11 23:35:44.334003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285470 ] 00:22:23.639 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.639 [2024-07-11 23:35:44.398699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.639 [2024-07-11 23:35:44.486835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.574 23:35:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.575 23:35:45 -- common/autotest_common.sh@852 -- # return 0 00:22:24.575 23:35:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:24.832 [2024-07-11 23:35:45.719725] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.832 [2024-07-11 23:35:45.724905] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:24.832 [2024-07-11 23:35:45.724935] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:24.832 [2024-07-11 23:35:45.724972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:24.832 [2024-07-11 23:35:45.725588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc3000 (107): Transport endpoint is not connected 00:22:24.832 [2024-07-11 23:35:45.726576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc3000 (9): Bad file descriptor 00:22:24.832 [2024-07-11 23:35:45.727574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:24.832 [2024-07-11 23:35:45.727596] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:24.832 [2024-07-11 23:35:45.727609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
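The tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors above give the server-side view of both this test (target/tls.sh@161) and the host2 test before it: during the handshake the target looks up the PSK by the identity string the client presents, of the form NVMe0R01 <hostnqn> <subnqn>. Neither "host2 cnode1" nor "host1 cnode2" was ever registered (only host1 against cnode1 was, and no cnode2 subsystem exists), so the lookup fails, the target drops the connection, and the client-side errors plus the JSON-RPC dump below follow. A hypothetical positive path for this case would first provision the subsystem and host (serial number made up for illustration):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 \
    nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key1.txt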
00:22:24.832 request: 00:22:24.832 { 00:22:24.832 "name": "TLSTEST", 00:22:24.832 "trtype": "tcp", 00:22:24.832 "traddr": "10.0.0.2", 00:22:24.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.832 "adrfam": "ipv4", 00:22:24.832 "trsvcid": "4420", 00:22:24.832 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.832 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:24.832 "method": "bdev_nvme_attach_controller", 00:22:24.832 "req_id": 1 00:22:24.832 } 00:22:24.832 Got JSON-RPC error response 00:22:24.832 response: 00:22:24.832 { 00:22:24.832 "code": -32602, 00:22:24.832 "message": "Invalid parameters" 00:22:24.832 } 00:22:24.832 23:35:45 -- target/tls.sh@36 -- # killprocess 285470 00:22:24.832 23:35:45 -- common/autotest_common.sh@926 -- # '[' -z 285470 ']' 00:22:24.832 23:35:45 -- common/autotest_common.sh@930 -- # kill -0 285470 00:22:24.832 23:35:45 -- common/autotest_common.sh@931 -- # uname 00:22:24.832 23:35:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:24.832 23:35:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 285470 00:22:24.832 23:35:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:24.832 23:35:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:24.832 23:35:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 285470' 00:22:24.832 killing process with pid 285470 00:22:24.832 23:35:45 -- common/autotest_common.sh@945 -- # kill 285470 00:22:24.832 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.832 00:22:24.832 Latency(us) 00:22:24.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.832 =================================================================================================================== 00:22:24.832 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.832 23:35:45 -- common/autotest_common.sh@950 -- # wait 285470 00:22:25.090 23:35:45 -- target/tls.sh@37 -- # return 1 00:22:25.090 23:35:45 -- common/autotest_common.sh@643 -- # es=1 00:22:25.090 23:35:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:25.090 23:35:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:25.090 23:35:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:25.090 23:35:45 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:25.090 23:35:45 -- common/autotest_common.sh@640 -- # local es=0 00:22:25.090 23:35:45 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:25.090 23:35:45 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:25.090 23:35:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.090 23:35:45 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:25.090 23:35:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.090 23:35:45 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:25.090 23:35:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:25.090 23:35:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:25.090 23:35:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:25.090 23:35:45 -- target/tls.sh@23 -- # psk= 00:22:25.090 23:35:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:25.090 23:35:45 -- target/tls.sh@28 -- # 
bdevperf_pid=285624 00:22:25.090 23:35:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:25.090 23:35:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.090 23:35:45 -- target/tls.sh@31 -- # waitforlisten 285624 /var/tmp/bdevperf.sock 00:22:25.090 23:35:45 -- common/autotest_common.sh@819 -- # '[' -z 285624 ']' 00:22:25.090 23:35:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.090 23:35:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:25.090 23:35:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.091 23:35:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:25.091 23:35:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.091 [2024-07-11 23:35:46.023240] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:25.091 [2024-07-11 23:35:46.023329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285624 ] 00:22:25.350 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.350 [2024-07-11 23:35:46.094199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.350 [2024-07-11 23:35:46.187262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.609 23:35:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:25.609 23:35:46 -- common/autotest_common.sh@852 -- # return 0 00:22:25.609 23:35:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:25.869 [2024-07-11 23:35:46.745498] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:25.869 [2024-07-11 23:35:46.747777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a856d0 (9): Bad file descriptor 00:22:25.869 [2024-07-11 23:35:46.748772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:25.869 [2024-07-11 23:35:46.748794] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:25.869 [2024-07-11 23:35:46.748809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
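This third negative test (target/tls.sh@164) omits --psk entirely, so the initiator attempts a plain NVMe/TCP connection against a listener that was created with the TLS flag (nvmf_subsystem_add_listener ... -s 4420 -k earlier in the run). Note the difference in the trace: there is no "Could not find PSK for identity" error this time, presumably because no TLS handshake is attempted at all; the target simply closes the socket and the client sees the same errno 107 / bad-descriptor sequence before failing the attach. The failing invocation, paths abbreviated:

  # no --psk: plain TCP against a TLS-only ("-k") listener
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1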
00:22:25.869 request: 00:22:25.869 { 00:22:25.869 "name": "TLSTEST", 00:22:25.869 "trtype": "tcp", 00:22:25.869 "traddr": "10.0.0.2", 00:22:25.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.869 "adrfam": "ipv4", 00:22:25.869 "trsvcid": "4420", 00:22:25.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.869 "method": "bdev_nvme_attach_controller", 00:22:25.869 "req_id": 1 00:22:25.869 } 00:22:25.869 Got JSON-RPC error response 00:22:25.869 response: 00:22:25.869 { 00:22:25.869 "code": -32602, 00:22:25.869 "message": "Invalid parameters" 00:22:25.869 } 00:22:25.869 23:35:46 -- target/tls.sh@36 -- # killprocess 285624 00:22:25.869 23:35:46 -- common/autotest_common.sh@926 -- # '[' -z 285624 ']' 00:22:25.869 23:35:46 -- common/autotest_common.sh@930 -- # kill -0 285624 00:22:25.869 23:35:46 -- common/autotest_common.sh@931 -- # uname 00:22:25.869 23:35:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:25.869 23:35:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 285624 00:22:25.869 23:35:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:25.869 23:35:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:25.869 23:35:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 285624' 00:22:25.869 killing process with pid 285624 00:22:25.869 23:35:46 -- common/autotest_common.sh@945 -- # kill 285624 00:22:25.869 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.869 00:22:25.869 Latency(us) 00:22:25.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.869 =================================================================================================================== 00:22:25.869 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.128 23:35:46 -- common/autotest_common.sh@950 -- # wait 285624 00:22:26.128 23:35:47 -- target/tls.sh@37 -- # return 1 00:22:26.128 23:35:47 -- common/autotest_common.sh@643 -- # es=1 00:22:26.128 23:35:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:26.128 23:35:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:26.128 23:35:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:26.128 23:35:47 -- target/tls.sh@167 -- # killprocess 281596 00:22:26.128 23:35:47 -- common/autotest_common.sh@926 -- # '[' -z 281596 ']' 00:22:26.128 23:35:47 -- common/autotest_common.sh@930 -- # kill -0 281596 00:22:26.128 23:35:47 -- common/autotest_common.sh@931 -- # uname 00:22:26.128 23:35:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:26.128 23:35:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 281596 00:22:26.128 23:35:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:26.128 23:35:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:26.128 23:35:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 281596' 00:22:26.128 killing process with pid 281596 00:22:26.128 23:35:47 -- common/autotest_common.sh@945 -- # kill 281596 00:22:26.128 23:35:47 -- common/autotest_common.sh@950 -- # wait 281596 00:22:26.387 23:35:47 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:22:26.387 23:35:47 -- target/tls.sh@49 -- # local key hash crc 00:22:26.387 23:35:47 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:26.387 23:35:47 -- target/tls.sh@51 -- # hash=02 00:22:26.387 23:35:47 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:22:26.387 23:35:47 -- target/tls.sh@52 -- # gzip -1 -c 00:22:26.387 23:35:47 -- target/tls.sh@52 -- # tail -c8 00:22:26.387 23:35:47 -- target/tls.sh@52 -- # head -c 4 00:22:26.387 23:35:47 -- target/tls.sh@52 -- # crc='�e�'\''' 00:22:26.387 23:35:47 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:26.387 23:35:47 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:22:26.387 23:35:47 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:26.387 23:35:47 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:26.387 23:35:47 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:26.387 23:35:47 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:26.387 23:35:47 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:26.387 23:35:47 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:22:26.387 23:35:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:26.387 23:35:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:26.387 23:35:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.645 23:35:47 -- nvmf/common.sh@469 -- # nvmfpid=285788 00:22:26.645 23:35:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.645 23:35:47 -- nvmf/common.sh@470 -- # waitforlisten 285788 00:22:26.645 23:35:47 -- common/autotest_common.sh@819 -- # '[' -z 285788 ']' 00:22:26.645 23:35:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.645 23:35:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:26.645 23:35:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.645 23:35:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:26.645 23:35:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.645 [2024-07-11 23:35:47.415949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:26.645 [2024-07-11 23:35:47.416117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.645 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.645 [2024-07-11 23:35:47.528717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.905 [2024-07-11 23:35:47.632409] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:26.905 [2024-07-11 23:35:47.632567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.906 [2024-07-11 23:35:47.632587] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.906 [2024-07-11 23:35:47.632600] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
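Stepping back to the format_interchange_psk call that produced key_long.txt above (target/tls.sh@168): it is the same CRC32-and-base64 recipe as the hash-01 keys earlier, only with a longer key and hash identifier 02 (the interchange-format field selecting SHA-384 rather than SHA-256 for retained-PSK derivation). Under the same assumptions as the earlier sketch:

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"
  # expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: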
00:22:26.906 [2024-07-11 23:35:47.632635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.846 23:35:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:27.846 23:35:48 -- common/autotest_common.sh@852 -- # return 0 00:22:27.846 23:35:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:27.846 23:35:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:27.846 23:35:48 -- common/autotest_common.sh@10 -- # set +x 00:22:27.846 23:35:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.846 23:35:48 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:27.846 23:35:48 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:27.846 23:35:48 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:28.105 [2024-07-11 23:35:49.027893] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.105 23:35:49 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:28.671 23:35:49 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:28.929 [2024-07-11 23:35:49.661667] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.929 [2024-07-11 23:35:49.661964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.929 23:35:49 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:29.188 malloc0 00:22:29.188 23:35:50 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:29.446 23:35:50 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:29.705 23:35:50 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:29.705 23:35:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:29.705 23:35:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:29.705 23:35:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:29.705 23:35:50 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:29.705 23:35:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.705 23:35:50 -- target/tls.sh@28 -- # bdevperf_pid=286218 00:22:29.705 23:35:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.705 23:35:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.705 23:35:50 -- target/tls.sh@31 -- # waitforlisten 286218 /var/tmp/bdevperf.sock 00:22:29.705 23:35:50 -- common/autotest_common.sh@819 -- # '[' -z 286218 ']' 
00:22:29.705 23:35:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.705 23:35:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:29.705 23:35:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.705 23:35:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:29.705 23:35:50 -- common/autotest_common.sh@10 -- # set +x 00:22:29.963 [2024-07-11 23:35:50.693578] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:29.963 [2024-07-11 23:35:50.693661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286218 ] 00:22:29.963 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.963 [2024-07-11 23:35:50.762342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.963 [2024-07-11 23:35:50.847894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.898 23:35:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:30.898 23:35:51 -- common/autotest_common.sh@852 -- # return 0 00:22:30.898 23:35:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:31.156 [2024-07-11 23:35:52.060259] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.414 TLSTESTn1 00:22:31.414 23:35:52 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:31.414 Running I/O for 10 seconds... 
00:22:41.384 00:22:41.384 Latency(us) 00:22:41.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.384 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.384 Verification LBA range: start 0x0 length 0x2000 00:22:41.384 TLSTESTn1 : 10.03 2041.14 7.97 0.00 0.00 62632.33 8349.77 64468.01 00:22:41.384 =================================================================================================================== 00:22:41.384 Total : 2041.14 7.97 0.00 0.00 62632.33 8349.77 64468.01 00:22:41.384 0 00:22:41.384 23:36:02 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.384 23:36:02 -- target/tls.sh@45 -- # killprocess 286218 00:22:41.384 23:36:02 -- common/autotest_common.sh@926 -- # '[' -z 286218 ']' 00:22:41.384 23:36:02 -- common/autotest_common.sh@930 -- # kill -0 286218 00:22:41.384 23:36:02 -- common/autotest_common.sh@931 -- # uname 00:22:41.384 23:36:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:41.384 23:36:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 286218 00:22:41.643 23:36:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:41.643 23:36:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:41.643 23:36:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 286218' 00:22:41.643 killing process with pid 286218 00:22:41.643 23:36:02 -- common/autotest_common.sh@945 -- # kill 286218 00:22:41.643 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.643 00:22:41.643 Latency(us) 00:22:41.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.643 =================================================================================================================== 00:22:41.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.643 23:36:02 -- common/autotest_common.sh@950 -- # wait 286218 00:22:41.643 23:36:02 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.643 23:36:02 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.643 23:36:02 -- common/autotest_common.sh@640 -- # local es=0 00:22:41.643 23:36:02 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.643 23:36:02 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:41.643 23:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.643 23:36:02 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:41.643 23:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.643 23:36:02 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:41.643 23:36:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.643 23:36:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.643 23:36:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.643 23:36:02 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:41.643 23:36:02 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.643 23:36:02 -- target/tls.sh@28 -- # bdevperf_pid=287767 00:22:41.643 23:36:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.643 23:36:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.643 23:36:02 -- target/tls.sh@31 -- # waitforlisten 287767 /var/tmp/bdevperf.sock 00:22:41.643 23:36:02 -- common/autotest_common.sh@819 -- # '[' -z 287767 ']' 00:22:41.643 23:36:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.643 23:36:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:41.643 23:36:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.643 23:36:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:41.643 23:36:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.903 [2024-07-11 23:36:02.616029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:41.903 [2024-07-11 23:36:02.616129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287767 ] 00:22:41.903 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.903 [2024-07-11 23:36:02.686370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.903 [2024-07-11 23:36:02.771500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.304 23:36:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:43.304 23:36:03 -- common/autotest_common.sh@852 -- # return 0 00:22:43.304 23:36:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:43.304 [2024-07-11 23:36:04.028503] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.304 [2024-07-11 23:36:04.028552] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:43.304 request: 00:22:43.304 { 00:22:43.304 "name": "TLSTEST", 00:22:43.304 "trtype": "tcp", 00:22:43.304 "traddr": "10.0.0.2", 00:22:43.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.304 "adrfam": "ipv4", 00:22:43.304 "trsvcid": "4420", 00:22:43.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.304 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:43.304 "method": "bdev_nvme_attach_controller", 00:22:43.304 "req_id": 1 00:22:43.304 } 00:22:43.304 Got JSON-RPC error response 00:22:43.304 response: 00:22:43.304 { 00:22:43.304 "code": -22, 00:22:43.304 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:43.304 } 00:22:43.304 23:36:04 -- target/tls.sh@36 -- # killprocess 287767 00:22:43.304 23:36:04 -- common/autotest_common.sh@926 -- # '[' -z 287767 ']' 00:22:43.304 23:36:04 -- common/autotest_common.sh@930 -- # kill -0 287767 
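The code -22 "Could not retrieve PSK from file" failure traced just above is the permission check of target/tls.sh@179-180: after chmod 0666 the key file is group- and world-readable, and tcp_load_psk rejects it inside the bdev_nvme_attach_controller RPC, before any connection or handshake is attempted (note the absence of socket-level errors in this dump, unlike the earlier failures). The guard reduces to file mode, as the two chmod calls in this run demonstrate (path abbreviated):

  chmod 0600 test/nvmf/target/key_long.txt   # accepted: owner-only, used by the successful 10s run above
  chmod 0666 test/nvmf/target/key_long.txt   # rejected: "Incorrect permissions for PSK file"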
00:22:43.304 23:36:04 -- common/autotest_common.sh@931 -- # uname 00:22:43.304 23:36:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.304 23:36:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 287767 00:22:43.304 23:36:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:43.304 23:36:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:43.304 23:36:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 287767' 00:22:43.304 killing process with pid 287767 00:22:43.304 23:36:04 -- common/autotest_common.sh@945 -- # kill 287767 00:22:43.304 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.304 00:22:43.304 Latency(us) 00:22:43.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.304 =================================================================================================================== 00:22:43.304 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.304 23:36:04 -- common/autotest_common.sh@950 -- # wait 287767 00:22:43.563 23:36:04 -- target/tls.sh@37 -- # return 1 00:22:43.563 23:36:04 -- common/autotest_common.sh@643 -- # es=1 00:22:43.563 23:36:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:43.563 23:36:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:43.563 23:36:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:43.563 23:36:04 -- target/tls.sh@183 -- # killprocess 285788 00:22:43.563 23:36:04 -- common/autotest_common.sh@926 -- # '[' -z 285788 ']' 00:22:43.563 23:36:04 -- common/autotest_common.sh@930 -- # kill -0 285788 00:22:43.563 23:36:04 -- common/autotest_common.sh@931 -- # uname 00:22:43.563 23:36:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.563 23:36:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 285788 00:22:43.563 23:36:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:43.563 23:36:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:43.563 23:36:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 285788' 00:22:43.563 killing process with pid 285788 00:22:43.563 23:36:04 -- common/autotest_common.sh@945 -- # kill 285788 00:22:43.563 23:36:04 -- common/autotest_common.sh@950 -- # wait 285788 00:22:43.862 23:36:04 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:43.862 23:36:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:43.862 23:36:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:43.862 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:43.862 23:36:04 -- nvmf/common.sh@469 -- # nvmfpid=287978 00:22:43.862 23:36:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:43.862 23:36:04 -- nvmf/common.sh@470 -- # waitforlisten 287978 00:22:43.862 23:36:04 -- common/autotest_common.sh@819 -- # '[' -z 287978 ']' 00:22:43.862 23:36:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.862 23:36:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:43.862 23:36:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
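This is the expected failure path of the first negative test: bdevperf is launched idle with -z (wait for RPC) on its own socket (-r /var/tmp/bdevperf.sock), the TLS attach against the world-readable key is rejected with JSON-RPC error -22 ("Could not retrieve PSK from file"), and both bdevperf (pid 287767) and the previous target (pid 285788) are torn down before a fresh target is started for the server-side check. A minimal sketch of the launch-and-wait pattern, where $SPDK stands for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix used throughout this log and the readiness loop is an illustrative stand-in for the harness's waitforlisten helper:

    # start bdevperf idle; no bdevs exist until they are attached over its RPC socket
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # wait until the UNIX-domain RPC socket answers
    until $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done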
00:22:43.862 23:36:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:43.862 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:43.862 [2024-07-11 23:36:04.621471] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:43.862 [2024-07-11 23:36:04.621568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.862 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.862 [2024-07-11 23:36:04.706185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.144 [2024-07-11 23:36:04.810237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:44.144 [2024-07-11 23:36:04.810387] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.144 [2024-07-11 23:36:04.810407] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.144 [2024-07-11 23:36:04.810436] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.144 [2024-07-11 23:36:04.810475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.079 23:36:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:45.079 23:36:05 -- common/autotest_common.sh@852 -- # return 0 00:22:45.079 23:36:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:45.079 23:36:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:45.079 23:36:05 -- common/autotest_common.sh@10 -- # set +x 00:22:45.079 23:36:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.079 23:36:05 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:45.079 23:36:05 -- common/autotest_common.sh@640 -- # local es=0 00:22:45.079 23:36:05 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:45.079 23:36:05 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:22:45.079 23:36:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:45.079 23:36:05 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:22:45.079 23:36:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:45.079 23:36:05 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:45.079 23:36:05 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:45.079 23:36:05 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.079 [2024-07-11 23:36:05.994280] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.079 23:36:06 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.644 23:36:06 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:45.644 [2024-07-11 23:36:06.563879] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.644 [2024-07-11 23:36:06.564205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.644 23:36:06 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.210 malloc0 00:22:46.210 23:36:06 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.468 23:36:07 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:46.727 [2024-07-11 23:36:07.636895] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:46.727 [2024-07-11 23:36:07.636947] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:46.727 [2024-07-11 23:36:07.636978] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:46.727 request: 00:22:46.727 { 00:22:46.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.727 "host": "nqn.2016-06.io.spdk:host1", 00:22:46.727 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:46.727 "method": "nvmf_subsystem_add_host", 00:22:46.727 "req_id": 1 00:22:46.727 } 00:22:46.727 Got JSON-RPC error response 00:22:46.727 response: 00:22:46.727 { 00:22:46.727 "code": -32603, 00:22:46.727 "message": "Internal error" 00:22:46.727 } 00:22:46.727 23:36:07 -- common/autotest_common.sh@643 -- # es=1 00:22:46.727 23:36:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:46.727 23:36:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:46.727 23:36:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:46.727 23:36:07 -- target/tls.sh@189 -- # killprocess 287978 00:22:46.727 23:36:07 -- common/autotest_common.sh@926 -- # '[' -z 287978 ']' 00:22:46.727 23:36:07 -- common/autotest_common.sh@930 -- # kill -0 287978 00:22:46.727 23:36:07 -- common/autotest_common.sh@931 -- # uname 00:22:46.727 23:36:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:46.727 23:36:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 287978 00:22:46.985 23:36:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:46.985 23:36:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:46.985 23:36:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 287978' 00:22:46.985 killing process with pid 287978 00:22:46.985 23:36:07 -- common/autotest_common.sh@945 -- # kill 287978 00:22:46.985 23:36:07 -- common/autotest_common.sh@950 -- # wait 287978 00:22:47.245 23:36:07 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:47.245 23:36:07 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:22:47.245 23:36:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:47.245 23:36:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:47.245 23:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:47.245 23:36:07 -- nvmf/common.sh@469 -- # nvmfpid=288515 00:22:47.245 23:36:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
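Both failures so far come from the same check: with the key file left at mode 0666 by the earlier chmod, the initiator-side attach failed with -22, and the target-side nvmf_subsystem_add_host above fails with -32603 ("Incorrect permissions for PSK file"). SPDK only loads a PSK file that is private to its owner, so the script tightens the mode before rerunning the positive path, as just logged ($SPDK as before):

    # owner read/write only; group/other permission bits make SPDK reject the key
    chmod 0600 $SPDK/test/nvmf/target/key_long.txt
    $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk $SPDK/test/nvmf/target/key_long.txt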
00:22:47.245 23:36:07 -- nvmf/common.sh@470 -- # waitforlisten 288515 00:22:47.245 23:36:07 -- common/autotest_common.sh@819 -- # '[' -z 288515 ']' 00:22:47.245 23:36:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.245 23:36:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:47.245 23:36:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.245 23:36:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:47.245 23:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:47.245 [2024-07-11 23:36:08.008340] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:47.245 [2024-07-11 23:36:08.008446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.245 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.245 [2024-07-11 23:36:08.085206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.245 [2024-07-11 23:36:08.179562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:47.245 [2024-07-11 23:36:08.179739] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.245 [2024-07-11 23:36:08.179760] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.245 [2024-07-11 23:36:08.179775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
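The trace notices repeat for each restarted app: with -e 0xFFFF every tracepoint group is enabled and events land in /dev/shm/nvmf_trace.0 (instance id 0). The two inspection options the notice names, sketched as commands (the spdk_trace path assumes the standard build layout):

    $SPDK/build/bin/spdk_trace -s nvmf -i 0   # snapshot events from the running target
    cp /dev/shm/nvmf_trace.0 .                # keep the buffer for offline analysis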
00:22:47.245 [2024-07-11 23:36:08.179807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.183 23:36:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:48.183 23:36:09 -- common/autotest_common.sh@852 -- # return 0 00:22:48.183 23:36:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.183 23:36:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:48.183 23:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:48.183 23:36:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.183 23:36:09 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:48.183 23:36:09 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:48.183 23:36:09 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:48.752 [2024-07-11 23:36:09.411176] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.752 23:36:09 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.752 23:36:09 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:49.321 [2024-07-11 23:36:10.109149] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:49.321 [2024-07-11 23:36:10.109486] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.321 23:36:10 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:49.890 malloc0 00:22:49.890 23:36:10 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:50.149 23:36:11 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:50.716 23:36:11 -- target/tls.sh@197 -- # bdevperf_pid=289357 00:22:50.716 23:36:11 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.716 23:36:11 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.716 23:36:11 -- target/tls.sh@200 -- # waitforlisten 289357 /var/tmp/bdevperf.sock 00:22:50.716 23:36:11 -- common/autotest_common.sh@819 -- # '[' -z 289357 ']' 00:22:50.716 23:36:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.716 23:36:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:50.716 23:36:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
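With the key at 0600 the same setup_nvmf_tgt sequence now completes, and a new bdevperf (pid 289357) is brought up for the positive attach. Condensed, the target-side TLS provisioning is six RPCs against the target socket (rpc.py defaults to /var/tmp/spdk.sock; key path shortened):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k makes it a TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt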
00:22:50.716 23:36:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:50.716 23:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:50.716 [2024-07-11 23:36:11.455999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:50.716 [2024-07-11 23:36:11.456101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289357 ] 00:22:50.716 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.716 [2024-07-11 23:36:11.530974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.716 [2024-07-11 23:36:11.622987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.974 23:36:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.974 23:36:11 -- common/autotest_common.sh@852 -- # return 0 00:22:50.974 23:36:11 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:51.233 [2024-07-11 23:36:12.022037] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.233 TLSTESTn1 00:22:51.233 23:36:12 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:51.799 23:36:12 -- target/tls.sh@205 -- # tgtconf='{ 00:22:51.799 "subsystems": [ 00:22:51.799 { 00:22:51.799 "subsystem": "iobuf", 00:22:51.799 "config": [ 00:22:51.799 { 00:22:51.799 "method": "iobuf_set_options", 00:22:51.799 "params": { 00:22:51.799 "small_pool_count": 8192, 00:22:51.799 "large_pool_count": 1024, 00:22:51.799 "small_bufsize": 8192, 00:22:51.799 "large_bufsize": 135168 00:22:51.799 } 00:22:51.799 } 00:22:51.799 ] 00:22:51.799 }, 00:22:51.799 { 00:22:51.799 "subsystem": "sock", 00:22:51.799 "config": [ 00:22:51.799 { 00:22:51.799 "method": "sock_impl_set_options", 00:22:51.799 "params": { 00:22:51.799 "impl_name": "posix", 00:22:51.799 "recv_buf_size": 2097152, 00:22:51.799 "send_buf_size": 2097152, 00:22:51.799 "enable_recv_pipe": true, 00:22:51.799 "enable_quickack": false, 00:22:51.799 "enable_placement_id": 0, 00:22:51.799 "enable_zerocopy_send_server": true, 00:22:51.799 "enable_zerocopy_send_client": false, 00:22:51.799 "zerocopy_threshold": 0, 00:22:51.799 "tls_version": 0, 00:22:51.800 "enable_ktls": false 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "sock_impl_set_options", 00:22:51.800 "params": { 00:22:51.800 "impl_name": "ssl", 00:22:51.800 "recv_buf_size": 4096, 00:22:51.800 "send_buf_size": 4096, 00:22:51.800 "enable_recv_pipe": true, 00:22:51.800 "enable_quickack": false, 00:22:51.800 "enable_placement_id": 0, 00:22:51.800 "enable_zerocopy_send_server": true, 00:22:51.800 "enable_zerocopy_send_client": false, 00:22:51.800 "zerocopy_threshold": 0, 00:22:51.800 "tls_version": 0, 00:22:51.800 "enable_ktls": false 00:22:51.800 } 00:22:51.800 } 00:22:51.800 ] 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "subsystem": "vmd", 00:22:51.800 "config": [] 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "subsystem": "accel", 00:22:51.800 "config": [ 00:22:51.800 { 00:22:51.800 "method": "accel_set_options", 00:22:51.800 "params": { 00:22:51.800 "small_cache_size": 128, 
00:22:51.800 "large_cache_size": 16, 00:22:51.800 "task_count": 2048, 00:22:51.800 "sequence_count": 2048, 00:22:51.800 "buf_count": 2048 00:22:51.800 } 00:22:51.800 } 00:22:51.800 ] 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "subsystem": "bdev", 00:22:51.800 "config": [ 00:22:51.800 { 00:22:51.800 "method": "bdev_set_options", 00:22:51.800 "params": { 00:22:51.800 "bdev_io_pool_size": 65535, 00:22:51.800 "bdev_io_cache_size": 256, 00:22:51.800 "bdev_auto_examine": true, 00:22:51.800 "iobuf_small_cache_size": 128, 00:22:51.800 "iobuf_large_cache_size": 16 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "bdev_raid_set_options", 00:22:51.800 "params": { 00:22:51.800 "process_window_size_kb": 1024 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "bdev_iscsi_set_options", 00:22:51.800 "params": { 00:22:51.800 "timeout_sec": 30 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "bdev_nvme_set_options", 00:22:51.800 "params": { 00:22:51.800 "action_on_timeout": "none", 00:22:51.800 "timeout_us": 0, 00:22:51.800 "timeout_admin_us": 0, 00:22:51.800 "keep_alive_timeout_ms": 10000, 00:22:51.800 "transport_retry_count": 4, 00:22:51.800 "arbitration_burst": 0, 00:22:51.800 "low_priority_weight": 0, 00:22:51.800 "medium_priority_weight": 0, 00:22:51.800 "high_priority_weight": 0, 00:22:51.800 "nvme_adminq_poll_period_us": 10000, 00:22:51.800 "nvme_ioq_poll_period_us": 0, 00:22:51.800 "io_queue_requests": 0, 00:22:51.800 "delay_cmd_submit": true, 00:22:51.800 "bdev_retry_count": 3, 00:22:51.800 "transport_ack_timeout": 0, 00:22:51.800 "ctrlr_loss_timeout_sec": 0, 00:22:51.800 "reconnect_delay_sec": 0, 00:22:51.800 "fast_io_fail_timeout_sec": 0, 00:22:51.800 "generate_uuids": false, 00:22:51.800 "transport_tos": 0, 00:22:51.800 "io_path_stat": false, 00:22:51.800 "allow_accel_sequence": false 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "bdev_nvme_set_hotplug", 00:22:51.800 "params": { 00:22:51.800 "period_us": 100000, 00:22:51.800 "enable": false 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "bdev_malloc_create", 00:22:51.800 "params": { 00:22:51.800 "name": "malloc0", 00:22:51.800 "num_blocks": 8192, 00:22:51.800 "block_size": 4096, 00:22:51.800 "physical_block_size": 4096, 00:22:51.800 "uuid": "eb498829-2ae1-43af-8db0-65e446f70953", 00:22:51.800 "optimal_io_boundary": 0 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "bdev_wait_for_examine" 00:22:51.800 } 00:22:51.800 ] 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "subsystem": "nbd", 00:22:51.800 "config": [] 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "subsystem": "scheduler", 00:22:51.800 "config": [ 00:22:51.800 { 00:22:51.800 "method": "framework_set_scheduler", 00:22:51.800 "params": { 00:22:51.800 "name": "static" 00:22:51.800 } 00:22:51.800 } 00:22:51.800 ] 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "subsystem": "nvmf", 00:22:51.800 "config": [ 00:22:51.800 { 00:22:51.800 "method": "nvmf_set_config", 00:22:51.800 "params": { 00:22:51.800 "discovery_filter": "match_any", 00:22:51.800 "admin_cmd_passthru": { 00:22:51.800 "identify_ctrlr": false 00:22:51.800 } 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "nvmf_set_max_subsystems", 00:22:51.800 "params": { 00:22:51.800 "max_subsystems": 1024 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "nvmf_set_crdt", 00:22:51.800 "params": { 00:22:51.800 "crdt1": 0, 00:22:51.800 "crdt2": 0, 00:22:51.800 "crdt3": 0 00:22:51.800 } 
00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "nvmf_create_transport", 00:22:51.800 "params": { 00:22:51.800 "trtype": "TCP", 00:22:51.800 "max_queue_depth": 128, 00:22:51.800 "max_io_qpairs_per_ctrlr": 127, 00:22:51.800 "in_capsule_data_size": 4096, 00:22:51.800 "max_io_size": 131072, 00:22:51.800 "io_unit_size": 131072, 00:22:51.800 "max_aq_depth": 128, 00:22:51.800 "num_shared_buffers": 511, 00:22:51.800 "buf_cache_size": 4294967295, 00:22:51.800 "dif_insert_or_strip": false, 00:22:51.800 "zcopy": false, 00:22:51.800 "c2h_success": false, 00:22:51.800 "sock_priority": 0, 00:22:51.800 "abort_timeout_sec": 1 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "nvmf_create_subsystem", 00:22:51.800 "params": { 00:22:51.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.800 "allow_any_host": false, 00:22:51.800 "serial_number": "SPDK00000000000001", 00:22:51.800 "model_number": "SPDK bdev Controller", 00:22:51.800 "max_namespaces": 10, 00:22:51.800 "min_cntlid": 1, 00:22:51.800 "max_cntlid": 65519, 00:22:51.800 "ana_reporting": false 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "nvmf_subsystem_add_host", 00:22:51.800 "params": { 00:22:51.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.800 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.800 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "nvmf_subsystem_add_ns", 00:22:51.800 "params": { 00:22:51.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.800 "namespace": { 00:22:51.800 "nsid": 1, 00:22:51.800 "bdev_name": "malloc0", 00:22:51.800 "nguid": "EB4988292AE143AF8DB065E446F70953", 00:22:51.800 "uuid": "eb498829-2ae1-43af-8db0-65e446f70953" 00:22:51.800 } 00:22:51.800 } 00:22:51.800 }, 00:22:51.800 { 00:22:51.800 "method": "nvmf_subsystem_add_listener", 00:22:51.800 "params": { 00:22:51.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.800 "listen_address": { 00:22:51.800 "trtype": "TCP", 00:22:51.800 "adrfam": "IPv4", 00:22:51.800 "traddr": "10.0.0.2", 00:22:51.800 "trsvcid": "4420" 00:22:51.800 }, 00:22:51.800 "secure_channel": true 00:22:51.800 } 00:22:51.800 } 00:22:51.800 ] 00:22:51.800 } 00:22:51.800 ] 00:22:51.800 }' 00:22:51.800 23:36:12 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:52.365 23:36:13 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:52.365 "subsystems": [ 00:22:52.365 { 00:22:52.365 "subsystem": "iobuf", 00:22:52.365 "config": [ 00:22:52.365 { 00:22:52.365 "method": "iobuf_set_options", 00:22:52.365 "params": { 00:22:52.365 "small_pool_count": 8192, 00:22:52.365 "large_pool_count": 1024, 00:22:52.365 "small_bufsize": 8192, 00:22:52.365 "large_bufsize": 135168 00:22:52.365 } 00:22:52.365 } 00:22:52.365 ] 00:22:52.365 }, 00:22:52.365 { 00:22:52.366 "subsystem": "sock", 00:22:52.366 "config": [ 00:22:52.366 { 00:22:52.366 "method": "sock_impl_set_options", 00:22:52.366 "params": { 00:22:52.366 "impl_name": "posix", 00:22:52.366 "recv_buf_size": 2097152, 00:22:52.366 "send_buf_size": 2097152, 00:22:52.366 "enable_recv_pipe": true, 00:22:52.366 "enable_quickack": false, 00:22:52.366 "enable_placement_id": 0, 00:22:52.366 "enable_zerocopy_send_server": true, 00:22:52.366 "enable_zerocopy_send_client": false, 00:22:52.366 "zerocopy_threshold": 0, 00:22:52.366 "tls_version": 0, 00:22:52.366 "enable_ktls": false 00:22:52.366 } 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "method": 
"sock_impl_set_options", 00:22:52.366 "params": { 00:22:52.366 "impl_name": "ssl", 00:22:52.366 "recv_buf_size": 4096, 00:22:52.366 "send_buf_size": 4096, 00:22:52.366 "enable_recv_pipe": true, 00:22:52.366 "enable_quickack": false, 00:22:52.366 "enable_placement_id": 0, 00:22:52.366 "enable_zerocopy_send_server": true, 00:22:52.366 "enable_zerocopy_send_client": false, 00:22:52.366 "zerocopy_threshold": 0, 00:22:52.366 "tls_version": 0, 00:22:52.366 "enable_ktls": false 00:22:52.366 } 00:22:52.366 } 00:22:52.366 ] 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "subsystem": "vmd", 00:22:52.366 "config": [] 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "subsystem": "accel", 00:22:52.366 "config": [ 00:22:52.366 { 00:22:52.366 "method": "accel_set_options", 00:22:52.366 "params": { 00:22:52.366 "small_cache_size": 128, 00:22:52.366 "large_cache_size": 16, 00:22:52.366 "task_count": 2048, 00:22:52.366 "sequence_count": 2048, 00:22:52.366 "buf_count": 2048 00:22:52.366 } 00:22:52.366 } 00:22:52.366 ] 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "subsystem": "bdev", 00:22:52.366 "config": [ 00:22:52.366 { 00:22:52.366 "method": "bdev_set_options", 00:22:52.366 "params": { 00:22:52.366 "bdev_io_pool_size": 65535, 00:22:52.366 "bdev_io_cache_size": 256, 00:22:52.366 "bdev_auto_examine": true, 00:22:52.366 "iobuf_small_cache_size": 128, 00:22:52.366 "iobuf_large_cache_size": 16 00:22:52.366 } 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "method": "bdev_raid_set_options", 00:22:52.366 "params": { 00:22:52.366 "process_window_size_kb": 1024 00:22:52.366 } 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "method": "bdev_iscsi_set_options", 00:22:52.366 "params": { 00:22:52.366 "timeout_sec": 30 00:22:52.366 } 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "method": "bdev_nvme_set_options", 00:22:52.366 "params": { 00:22:52.366 "action_on_timeout": "none", 00:22:52.366 "timeout_us": 0, 00:22:52.366 "timeout_admin_us": 0, 00:22:52.366 "keep_alive_timeout_ms": 10000, 00:22:52.366 "transport_retry_count": 4, 00:22:52.366 "arbitration_burst": 0, 00:22:52.366 "low_priority_weight": 0, 00:22:52.366 "medium_priority_weight": 0, 00:22:52.366 "high_priority_weight": 0, 00:22:52.366 "nvme_adminq_poll_period_us": 10000, 00:22:52.366 "nvme_ioq_poll_period_us": 0, 00:22:52.366 "io_queue_requests": 512, 00:22:52.366 "delay_cmd_submit": true, 00:22:52.366 "bdev_retry_count": 3, 00:22:52.366 "transport_ack_timeout": 0, 00:22:52.366 "ctrlr_loss_timeout_sec": 0, 00:22:52.366 "reconnect_delay_sec": 0, 00:22:52.366 "fast_io_fail_timeout_sec": 0, 00:22:52.366 "generate_uuids": false, 00:22:52.366 "transport_tos": 0, 00:22:52.366 "io_path_stat": false, 00:22:52.366 "allow_accel_sequence": false 00:22:52.366 } 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "method": "bdev_nvme_attach_controller", 00:22:52.366 "params": { 00:22:52.366 "name": "TLSTEST", 00:22:52.366 "trtype": "TCP", 00:22:52.366 "adrfam": "IPv4", 00:22:52.366 "traddr": "10.0.0.2", 00:22:52.366 "trsvcid": "4420", 00:22:52.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.366 "prchk_reftag": false, 00:22:52.366 "prchk_guard": false, 00:22:52.366 "ctrlr_loss_timeout_sec": 0, 00:22:52.366 "reconnect_delay_sec": 0, 00:22:52.366 "fast_io_fail_timeout_sec": 0, 00:22:52.366 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:52.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.366 "hdgst": false, 00:22:52.366 "ddgst": false 00:22:52.366 } 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "method": "bdev_nvme_set_hotplug", 00:22:52.366 
"params": { 00:22:52.366 "period_us": 100000, 00:22:52.366 "enable": false 00:22:52.366 } 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "method": "bdev_wait_for_examine" 00:22:52.366 } 00:22:52.366 ] 00:22:52.366 }, 00:22:52.366 { 00:22:52.366 "subsystem": "nbd", 00:22:52.366 "config": [] 00:22:52.366 } 00:22:52.366 ] 00:22:52.366 }' 00:22:52.366 23:36:13 -- target/tls.sh@208 -- # killprocess 289357 00:22:52.366 23:36:13 -- common/autotest_common.sh@926 -- # '[' -z 289357 ']' 00:22:52.366 23:36:13 -- common/autotest_common.sh@930 -- # kill -0 289357 00:22:52.366 23:36:13 -- common/autotest_common.sh@931 -- # uname 00:22:52.366 23:36:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:52.366 23:36:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 289357 00:22:52.366 23:36:13 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:52.366 23:36:13 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:52.366 23:36:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 289357' 00:22:52.366 killing process with pid 289357 00:22:52.366 23:36:13 -- common/autotest_common.sh@945 -- # kill 289357 00:22:52.366 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.366 00:22:52.366 Latency(us) 00:22:52.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.366 =================================================================================================================== 00:22:52.366 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.366 23:36:13 -- common/autotest_common.sh@950 -- # wait 289357 00:22:52.366 23:36:13 -- target/tls.sh@209 -- # killprocess 288515 00:22:52.366 23:36:13 -- common/autotest_common.sh@926 -- # '[' -z 288515 ']' 00:22:52.366 23:36:13 -- common/autotest_common.sh@930 -- # kill -0 288515 00:22:52.366 23:36:13 -- common/autotest_common.sh@931 -- # uname 00:22:52.366 23:36:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:52.366 23:36:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 288515 00:22:52.625 23:36:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:52.625 23:36:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:52.625 23:36:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 288515' 00:22:52.625 killing process with pid 288515 00:22:52.625 23:36:13 -- common/autotest_common.sh@945 -- # kill 288515 00:22:52.625 23:36:13 -- common/autotest_common.sh@950 -- # wait 288515 00:22:52.884 23:36:13 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:52.884 23:36:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:52.884 23:36:13 -- target/tls.sh@212 -- # echo '{ 00:22:52.884 "subsystems": [ 00:22:52.884 { 00:22:52.884 "subsystem": "iobuf", 00:22:52.884 "config": [ 00:22:52.884 { 00:22:52.884 "method": "iobuf_set_options", 00:22:52.884 "params": { 00:22:52.884 "small_pool_count": 8192, 00:22:52.884 "large_pool_count": 1024, 00:22:52.884 "small_bufsize": 8192, 00:22:52.884 "large_bufsize": 135168 00:22:52.884 } 00:22:52.884 } 00:22:52.884 ] 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "subsystem": "sock", 00:22:52.884 "config": [ 00:22:52.884 { 00:22:52.884 "method": "sock_impl_set_options", 00:22:52.884 "params": { 00:22:52.884 "impl_name": "posix", 00:22:52.884 "recv_buf_size": 2097152, 00:22:52.884 "send_buf_size": 2097152, 00:22:52.884 "enable_recv_pipe": true, 00:22:52.884 "enable_quickack": false, 00:22:52.884 
"enable_placement_id": 0, 00:22:52.884 "enable_zerocopy_send_server": true, 00:22:52.884 "enable_zerocopy_send_client": false, 00:22:52.884 "zerocopy_threshold": 0, 00:22:52.884 "tls_version": 0, 00:22:52.884 "enable_ktls": false 00:22:52.884 } 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "method": "sock_impl_set_options", 00:22:52.884 "params": { 00:22:52.884 "impl_name": "ssl", 00:22:52.884 "recv_buf_size": 4096, 00:22:52.884 "send_buf_size": 4096, 00:22:52.884 "enable_recv_pipe": true, 00:22:52.884 "enable_quickack": false, 00:22:52.884 "enable_placement_id": 0, 00:22:52.884 "enable_zerocopy_send_server": true, 00:22:52.884 "enable_zerocopy_send_client": false, 00:22:52.884 "zerocopy_threshold": 0, 00:22:52.884 "tls_version": 0, 00:22:52.884 "enable_ktls": false 00:22:52.884 } 00:22:52.884 } 00:22:52.884 ] 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "subsystem": "vmd", 00:22:52.884 "config": [] 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "subsystem": "accel", 00:22:52.884 "config": [ 00:22:52.884 { 00:22:52.884 "method": "accel_set_options", 00:22:52.884 "params": { 00:22:52.884 "small_cache_size": 128, 00:22:52.884 "large_cache_size": 16, 00:22:52.884 "task_count": 2048, 00:22:52.884 "sequence_count": 2048, 00:22:52.884 "buf_count": 2048 00:22:52.884 } 00:22:52.884 } 00:22:52.884 ] 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "subsystem": "bdev", 00:22:52.884 "config": [ 00:22:52.884 { 00:22:52.884 "method": "bdev_set_options", 00:22:52.884 "params": { 00:22:52.884 "bdev_io_pool_size": 65535, 00:22:52.884 "bdev_io_cache_size": 256, 00:22:52.884 "bdev_auto_examine": true, 00:22:52.884 "iobuf_small_cache_size": 128, 00:22:52.884 "iobuf_large_cache_size": 16 00:22:52.884 } 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "method": "bdev_raid_set_options", 00:22:52.884 "params": { 00:22:52.884 "process_window_size_kb": 1024 00:22:52.884 } 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "method": "bdev_iscsi_set_options", 00:22:52.884 "params": { 00:22:52.884 "timeout_sec": 30 00:22:52.884 } 00:22:52.884 }, 00:22:52.884 { 00:22:52.884 "method": "bdev_nvme_set_options", 00:22:52.884 "params": { 00:22:52.884 "action_on_timeout": "none", 00:22:52.884 "timeout_us": 0, 00:22:52.884 "timeout_admin_us": 0, 00:22:52.884 "keep_alive_timeout_ms": 10000, 00:22:52.884 "transport_retry_count": 4, 00:22:52.884 "arbitration_burst": 0, 00:22:52.884 "low_priority_weight": 0, 00:22:52.884 "medium_priority_weight": 0, 00:22:52.884 "high_priority_weight": 0, 00:22:52.884 "nvme_adminq_poll_period_us": 10000, 00:22:52.884 "nvme_ioq_poll_period_us": 0, 00:22:52.884 "io_queue_requests": 0, 00:22:52.885 "delay_cmd_submit": true, 00:22:52.885 "bdev_retry_count": 3, 00:22:52.885 "transport_ack_timeout": 0, 00:22:52.885 "ctrlr_loss_timeout_sec": 0, 00:22:52.885 "reconnect_delay_sec": 0, 00:22:52.885 "fast_io_fail_timeout_sec": 0, 00:22:52.885 "generate_uuids": false, 00:22:52.885 "transport_tos": 0, 00:22:52.885 "io_path_stat": false, 00:22:52.885 "allow_accel_sequence": false 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "bdev_nvme_set_hotplug", 00:22:52.885 "params": { 00:22:52.885 "period_us": 100000, 00:22:52.885 "enable": false 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "bdev_malloc_create", 00:22:52.885 "params": { 00:22:52.885 "name": "malloc0", 00:22:52.885 "num_blocks": 8192, 00:22:52.885 "block_size": 4096, 00:22:52.885 "physical_block_size": 4096, 00:22:52.885 "uuid": "eb498829-2ae1-43af-8db0-65e446f70953", 00:22:52.885 "optimal_io_boundary": 0 00:22:52.885 } 
00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "bdev_wait_for_examine" 00:22:52.885 } 00:22:52.885 ] 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "subsystem": "nbd", 00:22:52.885 "config": [] 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "subsystem": "scheduler", 00:22:52.885 "config": [ 00:22:52.885 { 00:22:52.885 "method": "framework_set_scheduler", 00:22:52.885 "params": { 00:22:52.885 "name": "static" 00:22:52.885 } 00:22:52.885 } 00:22:52.885 ] 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "subsystem": "nvmf", 00:22:52.885 "config": [ 00:22:52.885 { 00:22:52.885 "method": "nvmf_set_config", 00:22:52.885 "params": { 00:22:52.885 "discovery_filter": "match_any", 00:22:52.885 "admin_cmd_passthru": { 00:22:52.885 "identify_ctrlr": false 00:22:52.885 } 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "nvmf_set_max_subsystems", 00:22:52.885 "params": { 00:22:52.885 "max_subsystems": 1024 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "nvmf_set_crdt", 00:22:52.885 "params": { 00:22:52.885 "crdt1": 0, 00:22:52.885 "crdt2": 0, 00:22:52.885 "crdt3": 0 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "nvmf_create_transport", 00:22:52.885 "params": { 00:22:52.885 "trtype": "TCP", 00:22:52.885 "max_queue_depth": 128, 00:22:52.885 "max_io_qpairs_per_ctrlr": 127, 00:22:52.885 "in_capsule_data_size": 4096, 00:22:52.885 "max_io_size": 131072, 00:22:52.885 "io_unit_size": 131072, 00:22:52.885 "max_aq_depth": 128, 00:22:52.885 "num_shared_buffers": 511, 00:22:52.885 "buf_cache_size": 4294967295, 00:22:52.885 "dif_insert_or_strip": false, 00:22:52.885 "zcopy": false, 00:22:52.885 "c2h_success": false, 00:22:52.885 "sock_priority": 0, 00:22:52.885 "abort_timeout_sec": 1 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "nvmf_create_subsystem", 00:22:52.885 "params": { 00:22:52.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.885 "allow_any_host": false, 00:22:52.885 "serial_number": "SPDK00000000000001", 00:22:52.885 "model_number": "SPDK bdev Controller", 00:22:52.885 "max_namespaces": 10, 00:22:52.885 "min_cntlid": 1, 00:22:52.885 "max_cntlid": 65519, 00:22:52.885 "ana_reporting": false 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "nvmf_subsystem_add_host", 00:22:52.885 "params": { 00:22:52.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.885 "host": "nqn.2016-06.io.spdk:host1", 00:22:52.885 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "nvmf_subsystem_add_ns", 00:22:52.885 "params": { 00:22:52.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.885 "namespace": { 00:22:52.885 "nsid": 1, 00:22:52.885 "bdev_name": "malloc0", 00:22:52.885 "nguid": "EB4988292AE143AF8DB065E446F70953", 00:22:52.885 "uuid": "eb498829-2ae1-43af-8db0-65e446f70953" 00:22:52.885 } 00:22:52.885 } 00:22:52.885 }, 00:22:52.885 { 00:22:52.885 "method": "nvmf_subsystem_add_listener", 00:22:52.885 "params": { 00:22:52.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.885 "listen_address": { 00:22:52.885 "trtype": "TCP", 00:22:52.885 "adrfam": "IPv4", 00:22:52.885 "traddr": "10.0.0.2", 00:22:52.885 "trsvcid": "4420" 00:22:52.885 }, 00:22:52.885 "secure_channel": true 00:22:52.885 } 00:22:52.885 } 00:22:52.885 ] 00:22:52.885 } 00:22:52.885 ] 00:22:52.885 }' 00:22:52.885 23:36:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:52.885 23:36:13 -- common/autotest_common.sh@10 -- # set +x 00:22:52.885 23:36:13 -- 
nvmf/common.sh@469 -- # nvmfpid=289644 00:22:52.885 23:36:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:52.885 23:36:13 -- nvmf/common.sh@470 -- # waitforlisten 289644 00:22:52.885 23:36:13 -- common/autotest_common.sh@819 -- # '[' -z 289644 ']' 00:22:52.885 23:36:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.885 23:36:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:52.885 23:36:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.885 23:36:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:52.885 23:36:13 -- common/autotest_common.sh@10 -- # set +x 00:22:52.885 [2024-07-11 23:36:13.662337] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:52.885 [2024-07-11 23:36:13.662437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.885 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.885 [2024-07-11 23:36:13.746411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.144 [2024-07-11 23:36:13.845651] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:53.144 [2024-07-11 23:36:13.845844] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.144 [2024-07-11 23:36:13.845869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.144 [2024-07-11 23:36:13.845888] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
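This target (pid 289644) is provisioned differently: the JSON captured by save_config above is replayed at startup through -c /dev/fd/62, with the configuration echoed into that descriptor, so no follow-up RPCs are needed. The same idiom expressed with bash process substitution (the harness wires the file descriptor explicitly, so this form is illustrative):

    # feed a saved subsystem/listener/PSK configuration to nvmf_tgt at startup
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")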
00:22:53.144 [2024-07-11 23:36:13.845943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.144 [2024-07-11 23:36:14.092178] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.403 [2024-07-11 23:36:14.124149] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.403 [2024-07-11 23:36:14.124467] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.972 23:36:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:53.972 23:36:14 -- common/autotest_common.sh@852 -- # return 0 00:22:53.972 23:36:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:53.972 23:36:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:53.972 23:36:14 -- common/autotest_common.sh@10 -- # set +x 00:22:53.972 23:36:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.972 23:36:14 -- target/tls.sh@216 -- # bdevperf_pid=289799 00:22:53.972 23:36:14 -- target/tls.sh@217 -- # waitforlisten 289799 /var/tmp/bdevperf.sock 00:22:53.972 23:36:14 -- common/autotest_common.sh@819 -- # '[' -z 289799 ']' 00:22:53.972 23:36:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.972 23:36:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:53.972 23:36:14 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:53.972 23:36:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
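The initiator side mirrors this: bdevperf (pid 289799) receives its JSON on /dev/fd/63, and that config (echoed below) embeds the bdev_nvme_attach_controller parameters, PSK path included, so the TLS-wrapped NVMe/TCP controller is attached during startup rather than by a separate call. The equivalent one-shot RPC, as used earlier in the run:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk $SPDK/test/nvmf/target/key_long.txt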
00:22:53.972 23:36:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:53.972 23:36:14 -- target/tls.sh@213 -- # echo '{ 00:22:53.972 "subsystems": [ 00:22:53.972 { 00:22:53.972 "subsystem": "iobuf", 00:22:53.972 "config": [ 00:22:53.972 { 00:22:53.972 "method": "iobuf_set_options", 00:22:53.972 "params": { 00:22:53.972 "small_pool_count": 8192, 00:22:53.972 "large_pool_count": 1024, 00:22:53.972 "small_bufsize": 8192, 00:22:53.972 "large_bufsize": 135168 00:22:53.972 } 00:22:53.972 } 00:22:53.972 ] 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "subsystem": "sock", 00:22:53.972 "config": [ 00:22:53.972 { 00:22:53.972 "method": "sock_impl_set_options", 00:22:53.972 "params": { 00:22:53.972 "impl_name": "posix", 00:22:53.972 "recv_buf_size": 2097152, 00:22:53.972 "send_buf_size": 2097152, 00:22:53.972 "enable_recv_pipe": true, 00:22:53.972 "enable_quickack": false, 00:22:53.972 "enable_placement_id": 0, 00:22:53.972 "enable_zerocopy_send_server": true, 00:22:53.972 "enable_zerocopy_send_client": false, 00:22:53.972 "zerocopy_threshold": 0, 00:22:53.972 "tls_version": 0, 00:22:53.972 "enable_ktls": false 00:22:53.972 } 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "method": "sock_impl_set_options", 00:22:53.972 "params": { 00:22:53.972 "impl_name": "ssl", 00:22:53.972 "recv_buf_size": 4096, 00:22:53.972 "send_buf_size": 4096, 00:22:53.972 "enable_recv_pipe": true, 00:22:53.972 "enable_quickack": false, 00:22:53.972 "enable_placement_id": 0, 00:22:53.972 "enable_zerocopy_send_server": true, 00:22:53.972 "enable_zerocopy_send_client": false, 00:22:53.972 "zerocopy_threshold": 0, 00:22:53.972 "tls_version": 0, 00:22:53.972 "enable_ktls": false 00:22:53.972 } 00:22:53.972 } 00:22:53.972 ] 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "subsystem": "vmd", 00:22:53.972 "config": [] 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "subsystem": "accel", 00:22:53.972 "config": [ 00:22:53.972 { 00:22:53.972 "method": "accel_set_options", 00:22:53.972 "params": { 00:22:53.972 "small_cache_size": 128, 00:22:53.972 "large_cache_size": 16, 00:22:53.972 "task_count": 2048, 00:22:53.972 "sequence_count": 2048, 00:22:53.972 "buf_count": 2048 00:22:53.972 } 00:22:53.972 } 00:22:53.972 ] 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "subsystem": "bdev", 00:22:53.972 "config": [ 00:22:53.972 { 00:22:53.972 "method": "bdev_set_options", 00:22:53.972 "params": { 00:22:53.972 "bdev_io_pool_size": 65535, 00:22:53.972 "bdev_io_cache_size": 256, 00:22:53.972 "bdev_auto_examine": true, 00:22:53.972 "iobuf_small_cache_size": 128, 00:22:53.972 "iobuf_large_cache_size": 16 00:22:53.972 } 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "method": "bdev_raid_set_options", 00:22:53.972 "params": { 00:22:53.972 "process_window_size_kb": 1024 00:22:53.972 } 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "method": "bdev_iscsi_set_options", 00:22:53.972 "params": { 00:22:53.972 "timeout_sec": 30 00:22:53.972 } 00:22:53.972 }, 00:22:53.972 { 00:22:53.972 "method": "bdev_nvme_set_options", 00:22:53.973 "params": { 00:22:53.973 "action_on_timeout": "none", 00:22:53.973 "timeout_us": 0, 00:22:53.973 "timeout_admin_us": 0, 00:22:53.973 "keep_alive_timeout_ms": 10000, 00:22:53.973 "transport_retry_count": 4, 00:22:53.973 "arbitration_burst": 0, 00:22:53.973 "low_priority_weight": 0, 00:22:53.973 "medium_priority_weight": 0, 00:22:53.973 "high_priority_weight": 0, 00:22:53.973 "nvme_adminq_poll_period_us": 10000, 00:22:53.973 "nvme_ioq_poll_period_us": 0, 00:22:53.973 "io_queue_requests": 512, 00:22:53.973 "delay_cmd_submit": true, 00:22:53.973 
"bdev_retry_count": 3, 00:22:53.973 "transport_ack_timeout": 0, 00:22:53.973 "ctrlr_loss_timeout_sec": 0, 00:22:53.973 "reconnect_delay_sec": 0, 00:22:53.973 "fast_io_fail_timeout_sec": 0, 00:22:53.973 "generate_uuids": false, 00:22:53.973 "transport_tos": 0, 00:22:53.973 "io_path_stat": false, 00:22:53.973 "allow_accel_sequence": false 00:22:53.973 } 00:22:53.973 }, 00:22:53.973 { 00:22:53.973 "method": "bdev_nvme_attach_controller", 00:22:53.973 "params": { 00:22:53.973 "name": "TLSTEST", 00:22:53.973 "trtype": "TCP", 00:22:53.973 "adrfam": "IPv4", 00:22:53.973 "traddr": "10.0.0.2", 00:22:53.973 "trsvcid": "4420", 00:22:53.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.973 "prchk_reftag": false, 00:22:53.973 "prchk_guard": false, 00:22:53.973 "ctrlr_loss_timeout_sec": 0, 00:22:53.973 "reconnect_delay_sec": 0, 00:22:53.973 "fast_io_fail_timeout_sec": 0, 00:22:53.973 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:53.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.973 "hdgst": 23:36:14 -- common/autotest_common.sh@10 -- # set +x 00:22:53.973 false, 00:22:53.973 "ddgst": false 00:22:53.973 } 00:22:53.973 }, 00:22:53.973 { 00:22:53.973 "method": "bdev_nvme_set_hotplug", 00:22:53.973 "params": { 00:22:53.973 "period_us": 100000, 00:22:53.973 "enable": false 00:22:53.973 } 00:22:53.973 }, 00:22:53.973 { 00:22:53.973 "method": "bdev_wait_for_examine" 00:22:53.973 } 00:22:53.973 ] 00:22:53.973 }, 00:22:53.973 { 00:22:53.973 "subsystem": "nbd", 00:22:53.973 "config": [] 00:22:53.973 } 00:22:53.973 ] 00:22:53.973 }' 00:22:53.973 [2024-07-11 23:36:14.793545] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:53.973 [2024-07-11 23:36:14.793632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289799 ] 00:22:53.973 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.973 [2024-07-11 23:36:14.861669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.231 [2024-07-11 23:36:14.954521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.231 [2024-07-11 23:36:15.113841] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.168 23:36:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:55.168 23:36:15 -- common/autotest_common.sh@852 -- # return 0 00:22:55.168 23:36:15 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:55.168 Running I/O for 10 seconds... 
00:23:07.397 00:23:07.397 Latency(us) 00:23:07.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.397 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.397 Verification LBA range: start 0x0 length 0x2000 00:23:07.397 TLSTESTn1 : 10.03 2157.40 8.43 0.00 0.00 59245.95 4344.79 64468.01 00:23:07.397 =================================================================================================================== 00:23:07.397 Total : 2157.40 8.43 0.00 0.00 59245.95 4344.79 64468.01 00:23:07.397 0 00:23:07.397 23:36:26 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.397 23:36:26 -- target/tls.sh@223 -- # killprocess 289799 00:23:07.397 23:36:26 -- common/autotest_common.sh@926 -- # '[' -z 289799 ']' 00:23:07.397 23:36:26 -- common/autotest_common.sh@930 -- # kill -0 289799 00:23:07.397 23:36:26 -- common/autotest_common.sh@931 -- # uname 00:23:07.397 23:36:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:07.397 23:36:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 289799 00:23:07.397 23:36:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:07.397 23:36:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:07.397 23:36:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 289799' 00:23:07.397 killing process with pid 289799 00:23:07.397 23:36:26 -- common/autotest_common.sh@945 -- # kill 289799 00:23:07.397 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.397 00:23:07.397 Latency(us) 00:23:07.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.397 =================================================================================================================== 00:23:07.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.397 23:36:26 -- common/autotest_common.sh@950 -- # wait 289799 00:23:07.397 23:36:26 -- target/tls.sh@224 -- # killprocess 289644 00:23:07.397 23:36:26 -- common/autotest_common.sh@926 -- # '[' -z 289644 ']' 00:23:07.397 23:36:26 -- common/autotest_common.sh@930 -- # kill -0 289644 00:23:07.398 23:36:26 -- common/autotest_common.sh@931 -- # uname 00:23:07.398 23:36:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:07.398 23:36:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 289644 00:23:07.398 23:36:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:07.398 23:36:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:07.398 23:36:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 289644' 00:23:07.398 killing process with pid 289644 00:23:07.398 23:36:26 -- common/autotest_common.sh@945 -- # kill 289644 00:23:07.398 23:36:26 -- common/autotest_common.sh@950 -- # wait 289644 00:23:07.398 23:36:26 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:23:07.398 23:36:26 -- target/tls.sh@227 -- # cleanup 00:23:07.398 23:36:26 -- target/tls.sh@15 -- # process_shm --id 0 00:23:07.398 23:36:26 -- common/autotest_common.sh@796 -- # type=--id 00:23:07.398 23:36:26 -- common/autotest_common.sh@797 -- # id=0 00:23:07.398 23:36:26 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:23:07.398 23:36:26 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:07.398 23:36:26 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:23:07.398 23:36:26 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 
]] 00:23:07.398 23:36:26 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:23:07.398 23:36:26 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:07.398 nvmf_trace.0 00:23:07.398 23:36:26 -- common/autotest_common.sh@811 -- # return 0 00:23:07.398 23:36:26 -- target/tls.sh@16 -- # killprocess 289799 00:23:07.398 23:36:26 -- common/autotest_common.sh@926 -- # '[' -z 289799 ']' 00:23:07.398 23:36:26 -- common/autotest_common.sh@930 -- # kill -0 289799 00:23:07.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (289799) - No such process 00:23:07.398 23:36:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 289799 is not found' 00:23:07.398 Process with pid 289799 is not found 00:23:07.398 23:36:26 -- target/tls.sh@17 -- # nvmftestfini 00:23:07.398 23:36:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:07.398 23:36:26 -- nvmf/common.sh@116 -- # sync 00:23:07.398 23:36:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:07.398 23:36:26 -- nvmf/common.sh@119 -- # set +e 00:23:07.398 23:36:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:07.398 23:36:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:07.398 rmmod nvme_tcp 00:23:07.398 rmmod nvme_fabrics 00:23:07.398 rmmod nvme_keyring 00:23:07.398 23:36:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:07.398 23:36:26 -- nvmf/common.sh@123 -- # set -e 00:23:07.398 23:36:26 -- nvmf/common.sh@124 -- # return 0 00:23:07.398 23:36:26 -- nvmf/common.sh@477 -- # '[' -n 289644 ']' 00:23:07.398 23:36:26 -- nvmf/common.sh@478 -- # killprocess 289644 00:23:07.398 23:36:26 -- common/autotest_common.sh@926 -- # '[' -z 289644 ']' 00:23:07.398 23:36:26 -- common/autotest_common.sh@930 -- # kill -0 289644 00:23:07.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (289644) - No such process 00:23:07.398 23:36:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 289644 is not found' 00:23:07.398 Process with pid 289644 is not found 00:23:07.398 23:36:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:07.398 23:36:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:07.398 23:36:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:07.398 23:36:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.398 23:36:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:07.398 23:36:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.398 23:36:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.398 23:36:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.966 23:36:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:07.966 23:36:28 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:23:07.966 00:23:07.966 real 1m19.892s 00:23:07.966 user 2m5.876s 00:23:07.966 sys 0m29.825s 00:23:07.966 23:36:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.966 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:23:07.966 ************************************ 00:23:07.966 END TEST nvmf_tls 00:23:07.966 ************************************ 00:23:08.226 23:36:28 -- 
nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:08.226 23:36:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:08.226 23:36:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:08.226 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:23:08.226 ************************************ 00:23:08.226 START TEST nvmf_fips 00:23:08.226 ************************************ 00:23:08.226 23:36:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:08.226 * Looking for test storage... 00:23:08.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:08.226 23:36:29 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.226 23:36:29 -- nvmf/common.sh@7 -- # uname -s 00:23:08.226 23:36:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.226 23:36:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.226 23:36:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.226 23:36:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.226 23:36:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.227 23:36:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.227 23:36:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.227 23:36:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.227 23:36:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.227 23:36:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.227 23:36:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:08.227 23:36:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:08.227 23:36:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.227 23:36:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.227 23:36:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.227 23:36:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.227 23:36:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.227 23:36:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.227 23:36:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.227 23:36:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.227 23:36:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.227 23:36:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.227 23:36:29 -- paths/export.sh@5 -- # export PATH 00:23:08.227 23:36:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.227 23:36:29 -- nvmf/common.sh@46 -- # : 0 00:23:08.227 23:36:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:08.227 23:36:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:08.227 23:36:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:08.227 23:36:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.227 23:36:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.227 23:36:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:08.227 23:36:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:08.227 23:36:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:08.227 23:36:29 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:08.227 23:36:29 -- fips/fips.sh@89 -- # check_openssl_version 00:23:08.227 23:36:29 -- fips/fips.sh@83 -- # local target=3.0.0 00:23:08.227 23:36:29 -- fips/fips.sh@85 -- # openssl version 00:23:08.227 23:36:29 -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:08.227 23:36:29 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:08.227 23:36:29 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:08.227 23:36:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:08.227 23:36:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:08.227 23:36:29 -- scripts/common.sh@335 -- # IFS=.-: 00:23:08.227 23:36:29 -- scripts/common.sh@335 -- # read -ra ver1 00:23:08.227 23:36:29 -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.227 23:36:29 -- scripts/common.sh@336 -- # read -ra ver2 00:23:08.227 23:36:29 -- scripts/common.sh@337 -- # local 'op=>=' 00:23:08.227 23:36:29 -- scripts/common.sh@339 -- # ver1_l=3 00:23:08.227 23:36:29 -- scripts/common.sh@340 -- # ver2_l=3 00:23:08.227 23:36:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
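The xtrace above has entered the harness's version gate: "ge 3.0.9 3.0.0" splits both version strings on ".", "-" and ":" and compares them numerically, field by field. A minimal standalone sketch of that loop - an illustrative ver_ge helper, not the literal ge/cmp_versions source from scripts/common.sh:

ver_ge() {   # usage: ver_ge 3.0.9 3.0.0 ; exit 0 when $1 >= $2
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0   # first differing field decides
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
    done
    return 0   # all fields equal counts as >=
}

The trace that continues below is exactly this walk: each "decimal N" call normalizes one field before the (( ver1[v] > ver2[v] )) tests.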
00:23:08.227 23:36:29 -- scripts/common.sh@343 -- # case "$op" in 00:23:08.227 23:36:29 -- scripts/common.sh@347 -- # : 1 00:23:08.227 23:36:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:08.227 23:36:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:08.227 23:36:29 -- scripts/common.sh@364 -- # decimal 3 00:23:08.227 23:36:29 -- scripts/common.sh@352 -- # local d=3 00:23:08.227 23:36:29 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:08.227 23:36:29 -- scripts/common.sh@354 -- # echo 3 00:23:08.227 23:36:29 -- scripts/common.sh@364 -- # ver1[v]=3 00:23:08.227 23:36:29 -- scripts/common.sh@365 -- # decimal 3 00:23:08.227 23:36:29 -- scripts/common.sh@352 -- # local d=3 00:23:08.227 23:36:29 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:08.227 23:36:29 -- scripts/common.sh@354 -- # echo 3 00:23:08.227 23:36:29 -- scripts/common.sh@365 -- # ver2[v]=3 00:23:08.227 23:36:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:08.227 23:36:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:08.227 23:36:29 -- scripts/common.sh@363 -- # (( v++ )) 00:23:08.227 23:36:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:08.227 23:36:29 -- scripts/common.sh@364 -- # decimal 0 00:23:08.227 23:36:29 -- scripts/common.sh@352 -- # local d=0 00:23:08.227 23:36:29 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:08.227 23:36:29 -- scripts/common.sh@354 -- # echo 0 00:23:08.227 23:36:29 -- scripts/common.sh@364 -- # ver1[v]=0 00:23:08.227 23:36:29 -- scripts/common.sh@365 -- # decimal 0 00:23:08.227 23:36:29 -- scripts/common.sh@352 -- # local d=0 00:23:08.227 23:36:29 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:08.227 23:36:29 -- scripts/common.sh@354 -- # echo 0 00:23:08.227 23:36:29 -- scripts/common.sh@365 -- # ver2[v]=0 00:23:08.227 23:36:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:08.227 23:36:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:08.227 23:36:29 -- scripts/common.sh@363 -- # (( v++ )) 00:23:08.227 23:36:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:08.227 23:36:29 -- scripts/common.sh@364 -- # decimal 9 00:23:08.227 23:36:29 -- scripts/common.sh@352 -- # local d=9 00:23:08.227 23:36:29 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:08.227 23:36:29 -- scripts/common.sh@354 -- # echo 9 00:23:08.227 23:36:29 -- scripts/common.sh@364 -- # ver1[v]=9 00:23:08.227 23:36:29 -- scripts/common.sh@365 -- # decimal 0 00:23:08.227 23:36:29 -- scripts/common.sh@352 -- # local d=0 00:23:08.227 23:36:29 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:08.227 23:36:29 -- scripts/common.sh@354 -- # echo 0 00:23:08.227 23:36:29 -- scripts/common.sh@365 -- # ver2[v]=0 00:23:08.227 23:36:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:08.227 23:36:29 -- scripts/common.sh@366 -- # return 0 00:23:08.227 23:36:29 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:08.227 23:36:29 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:08.227 23:36:29 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:08.227 23:36:29 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:08.227 23:36:29 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:08.227 23:36:29 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:08.227 23:36:29 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:08.227 23:36:29 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:23:08.227 23:36:29 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:23:08.227 23:36:29 -- fips/fips.sh@114 -- # build_openssl_config 00:23:08.227 23:36:29 -- fips/fips.sh@37 -- # cat 00:23:08.227 23:36:29 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:08.227 23:36:29 -- fips/fips.sh@58 -- # cat - 00:23:08.227 23:36:29 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:08.227 23:36:29 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:08.227 23:36:29 -- fips/fips.sh@117 -- # mapfile -t providers 00:23:08.227 23:36:29 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:23:08.227 23:36:29 -- fips/fips.sh@117 -- # grep name 00:23:08.227 23:36:29 -- fips/fips.sh@117 -- # openssl list -providers 00:23:08.227 23:36:29 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:08.227 23:36:29 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:08.227 23:36:29 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:08.513 23:36:29 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:08.513 23:36:29 -- fips/fips.sh@128 -- # : 00:23:08.513 23:36:29 -- common/autotest_common.sh@640 -- # local es=0 00:23:08.513 23:36:29 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:08.513 23:36:29 -- common/autotest_common.sh@628 -- # local arg=openssl 00:23:08.513 23:36:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:08.513 23:36:29 -- common/autotest_common.sh@632 -- # type -t openssl 00:23:08.513 23:36:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:08.514 23:36:29 -- common/autotest_common.sh@634 -- # type -P openssl 00:23:08.514 23:36:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:08.514 23:36:29 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:23:08.514 23:36:29 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:23:08.514 23:36:29 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:23:08.514 Error setting digest 00:23:08.514 001292CE847F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:08.514 001292CE847F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:08.514 23:36:29 -- common/autotest_common.sh@643 -- # es=1 00:23:08.514 23:36:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:08.514 23:36:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:08.514 23:36:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
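Two FIPS gates just passed in the trace: openssl list -providers must report both the base and fips providers, and an MD5 digest must be refused - the "Error setting digest" output above is the expected result, with es=1 recording the non-zero exit. A condensed sketch of the same sanity check, assuming OPENSSL_CONF already selects a FIPS-enabled config as it does here:

# provider check: the fips provider has to be loaded
openssl list -providers | grep -i name | grep -qi fips \
    || { echo 'fips provider not loaded' >&2; exit 1; }
# negative check: a non-approved digest must fail under FIPS
if echo test | openssl md5 >/dev/null 2>&1; then
    echo 'md5 succeeded, so FIPS mode is not enforced' >&2
    exit 1
fi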
00:23:08.514 23:36:29 -- fips/fips.sh@131 -- # nvmftestinit 00:23:08.514 23:36:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:08.514 23:36:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.514 23:36:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:08.514 23:36:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:08.514 23:36:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:08.514 23:36:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.514 23:36:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.514 23:36:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.514 23:36:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:08.514 23:36:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:08.514 23:36:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:08.514 23:36:29 -- common/autotest_common.sh@10 -- # set +x 00:23:11.078 23:36:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:11.078 23:36:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:11.078 23:36:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:11.078 23:36:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:11.078 23:36:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:11.078 23:36:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:11.078 23:36:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:11.078 23:36:31 -- nvmf/common.sh@294 -- # net_devs=() 00:23:11.078 23:36:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:11.078 23:36:31 -- nvmf/common.sh@295 -- # e810=() 00:23:11.078 23:36:31 -- nvmf/common.sh@295 -- # local -ga e810 00:23:11.078 23:36:31 -- nvmf/common.sh@296 -- # x722=() 00:23:11.078 23:36:31 -- nvmf/common.sh@296 -- # local -ga x722 00:23:11.078 23:36:31 -- nvmf/common.sh@297 -- # mlx=() 00:23:11.078 23:36:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:11.078 23:36:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.078 23:36:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:11.078 23:36:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:11.078 23:36:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:11.078 23:36:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.078 23:36:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:11.078 Found 0000:84:00.0 
(0x8086 - 0x159b) 00:23:11.078 23:36:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.078 23:36:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:11.078 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:11.078 23:36:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:11.078 23:36:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:11.078 23:36:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.078 23:36:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.078 23:36:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.078 23:36:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.078 23:36:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:11.078 Found net devices under 0000:84:00.0: cvl_0_0 00:23:11.078 23:36:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.078 23:36:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.079 23:36:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.079 23:36:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.079 23:36:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.079 23:36:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:11.079 Found net devices under 0000:84:00.1: cvl_0_1 00:23:11.079 23:36:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.079 23:36:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:11.079 23:36:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:11.079 23:36:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:11.079 23:36:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:11.079 23:36:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:11.079 23:36:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.079 23:36:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.079 23:36:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.079 23:36:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:11.079 23:36:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.079 23:36:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.079 23:36:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:11.079 23:36:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.079 23:36:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.079 23:36:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:11.079 23:36:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:11.079 23:36:31 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:23:11.079 23:36:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.079 23:36:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.079 23:36:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.079 23:36:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:11.079 23:36:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.338 23:36:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.338 23:36:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.338 23:36:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:11.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:23:11.338 00:23:11.338 --- 10.0.0.2 ping statistics --- 00:23:11.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.338 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:23:11.338 23:36:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:23:11.338 00:23:11.338 --- 10.0.0.1 ping statistics --- 00:23:11.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.338 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:23:11.338 23:36:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.338 23:36:32 -- nvmf/common.sh@410 -- # return 0 00:23:11.338 23:36:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:11.338 23:36:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.338 23:36:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:11.338 23:36:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:11.338 23:36:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.338 23:36:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:11.338 23:36:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:11.338 23:36:32 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:11.338 23:36:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:11.338 23:36:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:11.338 23:36:32 -- common/autotest_common.sh@10 -- # set +x 00:23:11.338 23:36:32 -- nvmf/common.sh@469 -- # nvmfpid=293278 00:23:11.338 23:36:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.338 23:36:32 -- nvmf/common.sh@470 -- # waitforlisten 293278 00:23:11.338 23:36:32 -- common/autotest_common.sh@819 -- # '[' -z 293278 ']' 00:23:11.338 23:36:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.338 23:36:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:11.338 23:36:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.338 23:36:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:11.338 23:36:32 -- common/autotest_common.sh@10 -- # set +x 00:23:11.338 [2024-07-11 23:36:32.188825] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
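nvmf_tcp_init, traced above, splits this two-port E810 between a fresh network namespace (target side) and the host (initiator side), then proves reachability with one ping in each direction. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator keeps the peer port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                               # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> host

The target app is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why every later target-side command in the log is wrapped the same way.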
00:23:11.338 [2024-07-11 23:36:32.188935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.338 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.338 [2024-07-11 23:36:32.274455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.596 [2024-07-11 23:36:32.383623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:11.596 [2024-07-11 23:36:32.383793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.596 [2024-07-11 23:36:32.383814] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.596 [2024-07-11 23:36:32.383828] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.596 [2024-07-11 23:36:32.383860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.596 23:36:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:11.596 23:36:32 -- common/autotest_common.sh@852 -- # return 0 00:23:11.596 23:36:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:11.596 23:36:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:11.596 23:36:32 -- common/autotest_common.sh@10 -- # set +x 00:23:11.855 23:36:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.855 23:36:32 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:11.855 23:36:32 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:11.855 23:36:32 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.855 23:36:32 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:11.855 23:36:32 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.855 23:36:32 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.855 23:36:32 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:11.855 23:36:32 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:12.114 [2024-07-11 23:36:32.895657] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.114 [2024-07-11 23:36:32.911631] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.114 [2024-07-11 23:36:32.911904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.114 malloc0 00:23:12.114 23:36:32 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.114 23:36:32 -- fips/fips.sh@148 -- # bdevperf_pid=293429 00:23:12.114 23:36:32 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.114 23:36:32 -- fips/fips.sh@149 -- # waitforlisten 293429 /var/tmp/bdevperf.sock 00:23:12.114 23:36:32 -- common/autotest_common.sh@819 -- # '[' -z 293429 ']' 00:23:12.114 23:36:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.114 23:36:32 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:23:12.114 23:36:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.114 23:36:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:12.114 23:36:32 -- common/autotest_common.sh@10 -- # set +x 00:23:12.114 [2024-07-11 23:36:33.057474] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:12.114 [2024-07-11 23:36:33.057591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293429 ] 00:23:12.372 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.372 [2024-07-11 23:36:33.134089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.372 [2024-07-11 23:36:33.226217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.742 23:36:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:13.742 23:36:34 -- common/autotest_common.sh@852 -- # return 0 00:23:13.742 23:36:34 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.000 [2024-07-11 23:36:34.860826] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.000 TLSTESTn1 00:23:14.257 23:36:34 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.257 Running I/O for 10 seconds... 
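The data-path side of the FIPS test is driven entirely over bdevperf's private RPC socket: start bdevperf idle with -z, attach an NVMe-oF controller with a PSK so the TCP connection negotiates TLS, then trigger the queued verify workload. Condensed from the traced commands, with the jenkins workspace prefix dropped:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The TLSTESTn1 bdev that the latency table below reports on is the namespace exposed by that attached controller.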
00:23:24.220 00:23:24.220 Latency(us) 00:23:24.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.221 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.221 Verification LBA range: start 0x0 length 0x2000 00:23:24.221 TLSTESTn1 : 10.03 1633.71 6.38 0.00 0.00 78232.58 10534.31 94371.84 00:23:24.221 =================================================================================================================== 00:23:24.221 Total : 1633.71 6.38 0.00 0.00 78232.58 10534.31 94371.84 00:23:24.221 0 00:23:24.221 23:36:45 -- fips/fips.sh@1 -- # cleanup 00:23:24.221 23:36:45 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:24.221 23:36:45 -- common/autotest_common.sh@796 -- # type=--id 00:23:24.221 23:36:45 -- common/autotest_common.sh@797 -- # id=0 00:23:24.221 23:36:45 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:23:24.221 23:36:45 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:24.221 23:36:45 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:23:24.221 23:36:45 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:23:24.221 23:36:45 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:23:24.221 23:36:45 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:24.221 nvmf_trace.0 00:23:24.479 23:36:45 -- common/autotest_common.sh@811 -- # return 0 00:23:24.479 23:36:45 -- fips/fips.sh@16 -- # killprocess 293429 00:23:24.479 23:36:45 -- common/autotest_common.sh@926 -- # '[' -z 293429 ']' 00:23:24.479 23:36:45 -- common/autotest_common.sh@930 -- # kill -0 293429 00:23:24.479 23:36:45 -- common/autotest_common.sh@931 -- # uname 00:23:24.479 23:36:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:24.479 23:36:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 293429 00:23:24.479 23:36:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:24.479 23:36:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:24.479 23:36:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 293429' 00:23:24.479 killing process with pid 293429 00:23:24.479 23:36:45 -- common/autotest_common.sh@945 -- # kill 293429 00:23:24.479 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.479 00:23:24.479 Latency(us) 00:23:24.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.479 =================================================================================================================== 00:23:24.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.479 23:36:45 -- common/autotest_common.sh@950 -- # wait 293429 00:23:24.737 23:36:45 -- fips/fips.sh@17 -- # nvmftestfini 00:23:24.737 23:36:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:24.737 23:36:45 -- nvmf/common.sh@116 -- # sync 00:23:24.737 23:36:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:24.737 23:36:45 -- nvmf/common.sh@119 -- # set +e 00:23:24.737 23:36:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:24.737 23:36:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:24.737 rmmod nvme_tcp 00:23:24.737 rmmod nvme_fabrics 00:23:24.737 rmmod nvme_keyring 00:23:24.737 23:36:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:24.737 23:36:45 -- nvmf/common.sh@123 -- # set -e 00:23:24.737 23:36:45 -- nvmf/common.sh@124 -- # return 0 00:23:24.737 
23:36:45 -- nvmf/common.sh@477 -- # '[' -n 293278 ']' 00:23:24.737 23:36:45 -- nvmf/common.sh@478 -- # killprocess 293278 00:23:24.737 23:36:45 -- common/autotest_common.sh@926 -- # '[' -z 293278 ']' 00:23:24.737 23:36:45 -- common/autotest_common.sh@930 -- # kill -0 293278 00:23:24.737 23:36:45 -- common/autotest_common.sh@931 -- # uname 00:23:24.737 23:36:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:24.737 23:36:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 293278 00:23:24.737 23:36:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:24.737 23:36:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:24.738 23:36:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 293278' 00:23:24.738 killing process with pid 293278 00:23:24.738 23:36:45 -- common/autotest_common.sh@945 -- # kill 293278 00:23:24.738 23:36:45 -- common/autotest_common.sh@950 -- # wait 293278 00:23:24.997 23:36:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:24.997 23:36:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:24.997 23:36:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:24.997 23:36:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.997 23:36:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:24.997 23:36:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.997 23:36:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.997 23:36:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.533 23:36:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:27.533 23:36:47 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.533 00:23:27.533 real 0m18.921s 00:23:27.533 user 0m23.781s 00:23:27.533 sys 0m7.930s 00:23:27.533 23:36:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.533 23:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:27.533 ************************************ 00:23:27.533 END TEST nvmf_fips 00:23:27.533 ************************************ 00:23:27.533 23:36:47 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:23:27.533 23:36:47 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:27.533 23:36:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:27.533 23:36:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:27.533 23:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:27.533 ************************************ 00:23:27.533 START TEST nvmf_fuzz 00:23:27.533 ************************************ 00:23:27.533 23:36:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:27.533 * Looking for test storage... 
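Both suites finish with the same process_shm step seen in the trace: any SPDK trace file left in /dev/shm (nvmf_trace.0 here) is tarred into the job's output directory for offline analysis before the kernel modules are removed. The core of that step:

shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
for n in $shm_files; do
    # $output_dir stands in for the job's spdk/../output path
    tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
done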
00:23:27.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:27.533 23:36:47 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.533 23:36:47 -- nvmf/common.sh@7 -- # uname -s 00:23:27.533 23:36:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.533 23:36:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.533 23:36:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.533 23:36:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.533 23:36:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.533 23:36:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.533 23:36:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.533 23:36:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.533 23:36:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.533 23:36:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.533 23:36:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:27.533 23:36:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:27.533 23:36:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.533 23:36:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.533 23:36:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.533 23:36:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.533 23:36:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.533 23:36:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.533 23:36:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.533 23:36:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.533 23:36:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.533 23:36:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.533 23:36:47 -- paths/export.sh@5 -- # export PATH 00:23:27.533 23:36:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.533 23:36:47 -- nvmf/common.sh@46 -- # : 0 00:23:27.533 23:36:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:27.533 23:36:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:27.533 23:36:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:27.534 23:36:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.534 23:36:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.534 23:36:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:27.534 23:36:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:27.534 23:36:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:27.534 23:36:47 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:27.534 23:36:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:27.534 23:36:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.534 23:36:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:27.534 23:36:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:27.534 23:36:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:27.534 23:36:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.534 23:36:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.534 23:36:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.534 23:36:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:27.534 23:36:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:27.534 23:36:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:27.534 23:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:30.071 23:36:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:30.071 23:36:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:30.071 23:36:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:30.071 23:36:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:30.071 23:36:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:30.071 23:36:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:30.071 23:36:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:30.071 23:36:50 -- nvmf/common.sh@294 -- # net_devs=() 00:23:30.071 23:36:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:30.071 23:36:50 -- nvmf/common.sh@295 -- # e810=() 00:23:30.071 23:36:50 -- nvmf/common.sh@295 -- # local -ga e810 00:23:30.071 23:36:50 -- nvmf/common.sh@296 -- # x722=() 
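The arrays being initialized here (e810, x722, mlx) whitelist the NIC models the harness can test by PCI vendor:device pair; pci_devs is then narrowed to whichever class the box actually has. An equivalent standalone probe for this rig's E810 IDs, using pciutils' lspci instead of the harness's cached bus scan (illustrative, not the harness code):

intel=0x8086
for id in 0x1592 0x159b; do                       # E810 device IDs from the tables above
    # -D keeps the domain:bus:dev.fn form, -n prints numeric IDs, -d filters vendor:device
    lspci -Dn -d "${intel#0x}:${id#0x}" | awk '{print $1}' | while read -r pci; do
        echo "Found $pci ($intel - $id)"
    done
done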
00:23:30.071 23:36:50 -- nvmf/common.sh@296 -- # local -ga x722 00:23:30.071 23:36:50 -- nvmf/common.sh@297 -- # mlx=() 00:23:30.071 23:36:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:30.071 23:36:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.071 23:36:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:30.071 23:36:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:30.071 23:36:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:30.071 23:36:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:30.071 23:36:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:30.071 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:30.071 23:36:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:30.071 23:36:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:30.071 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:30.071 23:36:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:30.071 23:36:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:30.071 23:36:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.071 23:36:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:30.071 23:36:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.071 23:36:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:30.071 Found net devices under 0000:84:00.0: cvl_0_0 00:23:30.071 23:36:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
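For each accepted PCI function the scan resolves the kernel netdev by globbing sysfs, which is how 0000:84:00.0 and 0000:84:00.1 become cvl_0_0 and cvl_0_1. That lookup in isolation:

pci=0000:84:00.0                                   # one of the ports found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdev(s) bound to this function
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"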
00:23:30.071 23:36:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:30.071 23:36:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.071 23:36:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:30.071 23:36:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.071 23:36:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:30.071 Found net devices under 0000:84:00.1: cvl_0_1 00:23:30.071 23:36:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.071 23:36:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:30.071 23:36:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:30.071 23:36:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:30.071 23:36:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.071 23:36:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.071 23:36:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.071 23:36:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:30.071 23:36:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.071 23:36:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.071 23:36:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:30.071 23:36:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.071 23:36:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.071 23:36:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:30.071 23:36:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:30.071 23:36:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.071 23:36:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.071 23:36:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.071 23:36:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.071 23:36:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:30.071 23:36:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.071 23:36:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.071 23:36:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.071 23:36:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:30.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:23:30.071 00:23:30.071 --- 10.0.0.2 ping statistics --- 00:23:30.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.071 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:30.071 23:36:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:30.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:23:30.071 00:23:30.071 --- 10.0.0.1 ping statistics --- 00:23:30.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.071 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:30.071 23:36:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.071 23:36:50 -- nvmf/common.sh@410 -- # return 0 00:23:30.071 23:36:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:30.071 23:36:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.071 23:36:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:30.071 23:36:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.071 23:36:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:30.071 23:36:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:30.071 23:36:50 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=296891 00:23:30.071 23:36:50 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:30.071 23:36:50 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:30.071 23:36:50 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 296891 00:23:30.071 23:36:50 -- common/autotest_common.sh@819 -- # '[' -z 296891 ']' 00:23:30.071 23:36:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.071 23:36:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:30.071 23:36:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
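nvmfappstart backgrounds nvmf_tgt inside the namespace (pid 296891 here, pinned to core mask 0x1) and then blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of that wait loop, assuming a plain socket probe - the real helper in autotest_common.sh also retries an actual RPC before declaring the target up:

waitforlisten() {   # usage: waitforlisten <pid> [rpc_addr]
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1     # app died while starting
        [[ -S $rpc_addr ]] && return 0             # socket node exists, target is up
        sleep 0.1
    done
    return 1                                       # gave up waiting
}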
00:23:30.071 23:36:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:30.071 23:36:50 -- common/autotest_common.sh@10 -- # set +x 00:23:30.329 23:36:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:30.329 23:36:51 -- common/autotest_common.sh@852 -- # return 0 00:23:30.329 23:36:51 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.329 23:36:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:30.329 23:36:51 -- common/autotest_common.sh@10 -- # set +x 00:23:30.588 23:36:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:30.588 23:36:51 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:30.588 23:36:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:30.588 23:36:51 -- common/autotest_common.sh@10 -- # set +x 00:23:30.588 Malloc0 00:23:30.588 23:36:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:30.588 23:36:51 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.588 23:36:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:30.588 23:36:51 -- common/autotest_common.sh@10 -- # set +x 00:23:30.588 23:36:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:30.588 23:36:51 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.588 23:36:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:30.588 23:36:51 -- common/autotest_common.sh@10 -- # set +x 00:23:30.588 23:36:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:30.588 23:36:51 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.588 23:36:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:30.588 23:36:51 -- common/autotest_common.sh@10 -- # set +x 00:23:30.588 23:36:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:30.588 23:36:51 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:30.588 23:36:51 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:02.655 Fuzzing completed. Shutting down the fuzz application 00:24:02.655 00:24:02.655 Dumping successful admin opcodes: 00:24:02.655 8, 9, 10, 24, 00:24:02.655 Dumping successful io opcodes: 00:24:02.655 0, 9, 00:24:02.655 NS: 0x200003aeff00 I/O qp, Total commands completed: 437276, total successful commands: 2553, random_seed: 2031685952 00:24:02.655 NS: 0x200003aeff00 admin qp, Total commands completed: 54080, total successful commands: 435, random_seed: 1285427904 00:24:02.655 23:37:22 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:02.914 Fuzzing completed. 
Shutting down the fuzz application 00:24:02.914 00:24:02.914 Dumping successful admin opcodes: 00:24:02.914 24, 00:24:02.914 Dumping successful io opcodes: 00:24:02.914 00:24:02.914 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 740563388 00:24:02.914 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 740722388 00:24:02.914 23:37:23 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.914 23:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.914 23:37:23 -- common/autotest_common.sh@10 -- # set +x 00:24:02.914 23:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.914 23:37:23 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:02.914 23:37:23 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:02.914 23:37:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:02.914 23:37:23 -- nvmf/common.sh@116 -- # sync 00:24:02.914 23:37:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:02.914 23:37:23 -- nvmf/common.sh@119 -- # set +e 00:24:02.914 23:37:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:02.914 23:37:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:03.173 rmmod nvme_tcp 00:24:03.173 rmmod nvme_fabrics 00:24:03.173 rmmod nvme_keyring 00:24:03.173 23:37:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:03.173 23:37:23 -- nvmf/common.sh@123 -- # set -e 00:24:03.173 23:37:23 -- nvmf/common.sh@124 -- # return 0 00:24:03.173 23:37:23 -- nvmf/common.sh@477 -- # '[' -n 296891 ']' 00:24:03.173 23:37:23 -- nvmf/common.sh@478 -- # killprocess 296891 00:24:03.173 23:37:23 -- common/autotest_common.sh@926 -- # '[' -z 296891 ']' 00:24:03.173 23:37:23 -- common/autotest_common.sh@930 -- # kill -0 296891 00:24:03.173 23:37:23 -- common/autotest_common.sh@931 -- # uname 00:24:03.173 23:37:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:03.173 23:37:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 296891 00:24:03.173 23:37:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:03.173 23:37:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:03.173 23:37:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 296891' 00:24:03.173 killing process with pid 296891 00:24:03.173 23:37:23 -- common/autotest_common.sh@945 -- # kill 296891 00:24:03.173 23:37:23 -- common/autotest_common.sh@950 -- # wait 296891 00:24:03.440 23:37:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:03.440 23:37:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:03.440 23:37:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:03.440 23:37:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.440 23:37:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:03.440 23:37:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.440 23:37:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.440 23:37:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.390 23:37:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:05.390 23:37:26 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:05.390 00:24:05.390 real 0m38.413s 00:24:05.390 user 0m51.613s 00:24:05.390 sys 0m16.082s 
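For reference, the target the fuzzer just exercised was assembled with the rpc_cmd calls traced before the run; spelled out with the rpc.py path shortened:

rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, same options as traced above
rpc.py bdev_malloc_create -b Malloc0 64 512        # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The two result dumps differ because the first invocation fuzzes with randomly generated commands for 30 seconds from a fixed seed, while the second replays the curated cases in example.json, which is why it completes almost no I/O.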
00:24:05.390 23:37:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.390 23:37:26 -- common/autotest_common.sh@10 -- # set +x 00:24:05.390 ************************************ 00:24:05.390 END TEST nvmf_fuzz 00:24:05.390 ************************************ 00:24:05.390 23:37:26 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:05.390 23:37:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:05.390 23:37:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:05.390 23:37:26 -- common/autotest_common.sh@10 -- # set +x 00:24:05.647 ************************************ 00:24:05.647 START TEST nvmf_multiconnection 00:24:05.647 ************************************ 00:24:05.647 23:37:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:05.647 * Looking for test storage... 00:24:05.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:05.647 23:37:26 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.647 23:37:26 -- nvmf/common.sh@7 -- # uname -s 00:24:05.647 23:37:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.647 23:37:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.647 23:37:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.647 23:37:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.647 23:37:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.647 23:37:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.647 23:37:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.647 23:37:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.647 23:37:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.647 23:37:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.647 23:37:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:05.647 23:37:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:05.647 23:37:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.647 23:37:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.647 23:37:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.647 23:37:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.647 23:37:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.647 23:37:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.647 23:37:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.647 23:37:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.647 23:37:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.647 23:37:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.647 23:37:26 -- paths/export.sh@5 -- # export PATH 00:24:05.647 23:37:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.647 23:37:26 -- nvmf/common.sh@46 -- # : 0 00:24:05.647 23:37:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:05.647 23:37:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:05.647 23:37:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:05.647 23:37:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.647 23:37:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.647 23:37:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:05.647 23:37:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:05.647 23:37:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:05.647 23:37:26 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:05.647 23:37:26 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:05.647 23:37:26 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:05.647 23:37:26 -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:05.647 23:37:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:05.647 23:37:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.647 23:37:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:05.647 23:37:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:05.647 23:37:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:05.648 23:37:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.648 23:37:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.648 23:37:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.648 23:37:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:05.648 23:37:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:05.648 23:37:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:05.648 23:37:26 -- common/autotest_common.sh@10 -- 
# set +x 00:24:08.178 23:37:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:08.178 23:37:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:08.178 23:37:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:08.178 23:37:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:08.178 23:37:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:08.178 23:37:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:08.178 23:37:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:08.178 23:37:29 -- nvmf/common.sh@294 -- # net_devs=() 00:24:08.178 23:37:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:08.178 23:37:29 -- nvmf/common.sh@295 -- # e810=() 00:24:08.178 23:37:29 -- nvmf/common.sh@295 -- # local -ga e810 00:24:08.178 23:37:29 -- nvmf/common.sh@296 -- # x722=() 00:24:08.178 23:37:29 -- nvmf/common.sh@296 -- # local -ga x722 00:24:08.178 23:37:29 -- nvmf/common.sh@297 -- # mlx=() 00:24:08.178 23:37:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:08.178 23:37:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.178 23:37:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:08.178 23:37:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:08.178 23:37:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:08.178 23:37:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.178 23:37:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:08.178 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:08.178 23:37:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.178 23:37:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:08.178 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:08.178 23:37:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.178 23:37:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.178 23:37:29 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:24:08.178 23:37:29 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:24:08.178 23:37:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:24:08.178 23:37:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:24:08.178 23:37:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:24:08.178 23:37:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:08.178 23:37:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:24:08.178 23:37:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:08.178 23:37:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
Found net devices under 0000:84:00.0: cvl_0_0
00:24:08.178 23:37:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:24:08.178 23:37:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:24:08.178 23:37:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:08.178 23:37:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:24:08.178 23:37:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:08.178 23:37:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
Found net devices under 0000:84:00.1: cvl_0_1
00:24:08.178 23:37:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:24:08.178 23:37:29 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:24:08.178 23:37:29 -- nvmf/common.sh@402 -- # is_hw=yes
00:24:08.178 23:37:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:24:08.178 23:37:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:24:08.178 23:37:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:24:08.178 23:37:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:08.178 23:37:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:08.178 23:37:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:08.178 23:37:29 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:24:08.178 23:37:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:08.178 23:37:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:08.178 23:37:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:24:08.178 23:37:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:08.178 23:37:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:08.178 23:37:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:24:08.178 23:37:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:24:08.178 23:37:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:24:08.178 23:37:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:08.483 23:37:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:08.483 23:37:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:08.483 23:37:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:24:08.483 23:37:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:08.483 23:37:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:08.483 23:37:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:08.483 23:37:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:24:08.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:08.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms
00:24:08.483
00:24:08.483 --- 10.0.0.2 ping statistics ---
00:24:08.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:08.483 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:24:08.483 23:37:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:08.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:08.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms
00:24:08.483
00:24:08.483 --- 10.0.0.1 ping statistics ---
00:24:08.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:08.483 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:24:08.483 23:37:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:08.483 23:37:29 -- nvmf/common.sh@410 -- # return 0
00:24:08.483 23:37:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:08.483 23:37:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:08.483 23:37:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:08.483 23:37:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:08.483 23:37:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:08.483 23:37:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:08.483 23:37:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:08.483 23:37:29 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:24:08.483 23:37:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:24:08.483 23:37:29 -- common/autotest_common.sh@712 -- # xtrace_disable
00:24:08.483 23:37:29 -- common/autotest_common.sh@10 -- # set +x
00:24:08.483 23:37:29 -- nvmf/common.sh@469 -- # nvmfpid=302903
00:24:08.483 23:37:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:08.483 23:37:29 -- nvmf/common.sh@470 -- # waitforlisten 302903
00:24:08.483 23:37:29 -- common/autotest_common.sh@819 -- # '[' -z 302903 ']'
00:24:08.483 23:37:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:08.483 23:37:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:08.483 23:37:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:08.483 23:37:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:08.483 23:37:29 -- common/autotest_common.sh@10 -- # set +x
00:24:08.742 [2024-07-11 23:37:29.363894] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:08.742 [2024-07-11 23:37:29.364063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:08.742 EAL: No free 2048 kB hugepages reported on node 1
00:24:08.742 [2024-07-11 23:37:29.472095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:08.742 [2024-07-11 23:37:29.572087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:24:08.742 [2024-07-11 23:37:29.572250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:08.742 [2024-07-11 23:37:29.572271] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
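
[Editor's note] Condensed, the network plumbing traced above lets one host play both roles: the first e810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of the same sequence, assuming the two ports carry the cvl_* names of this testbed and again using the hypothetical $SPDK shorthand:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic on the initiator link
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
  # the target itself then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With both one-packet pings answered (0% loss in the statistics above), nvmf_tgt comes up on four cores (-m 0xF) and the multiconnection test proceeds.
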
00:24:08.742 [2024-07-11 23:37:29.572286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.742 [2024-07-11 23:37:29.572357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.742 [2024-07-11 23:37:29.572402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.742 [2024-07-11 23:37:29.572451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:08.742 [2024-07-11 23:37:29.572454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.674 23:37:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:09.674 23:37:30 -- common/autotest_common.sh@852 -- # return 0 00:24:09.674 23:37:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:09.674 23:37:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.674 23:37:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.674 23:37:30 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.674 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.674 [2024-07-11 23:37:30.534299] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.674 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.674 23:37:30 -- target/multiconnection.sh@21 -- # seq 1 11 00:24:09.674 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.674 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:09.674 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.674 Malloc1 00:24:09.674 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.674 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:09.674 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.674 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.674 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.674 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.674 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.674 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.674 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.674 [2024-07-11 23:37:30.593274] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.674 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.674 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.674 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:09.674 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.674 Malloc2 00:24:09.674 23:37:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.674 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:09.674 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.674 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.931 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 Malloc3 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.931 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 Malloc4 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 
-- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.931 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 Malloc5 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.931 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 Malloc6 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.931 23:37:30 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 Malloc7 00:24:09.931 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.931 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:09.931 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.931 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.932 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.932 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:09.932 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.932 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.190 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 Malloc8 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.190 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 Malloc9 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.190 23:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:10.190 23:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 Malloc10 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:10.190 23:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:10.190 23:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:10.190 23:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.190 23:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:10.190 23:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 Malloc11 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:10.190 23:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:10.190 23:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.190 23:37:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:10.190 23:37:31 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:24:10.190 23:37:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.190 23:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.190 23:37:31 -- target/multiconnection.sh@28 -- # seq 1 11 00:24:10.190 23:37:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.190 23:37:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:11.124 23:37:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:11.124 23:37:31 -- common/autotest_common.sh@1177 -- # local i=0 00:24:11.124 23:37:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:11.124 23:37:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:11.124 23:37:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:13.025 23:37:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:13.025 23:37:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:13.025 23:37:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:24:13.025 23:37:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:13.025 23:37:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:13.025 23:37:33 -- common/autotest_common.sh@1187 -- # return 0 00:24:13.025 23:37:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.025 23:37:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:13.593 23:37:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:13.593 23:37:34 -- common/autotest_common.sh@1177 -- # local i=0 00:24:13.593 23:37:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:13.593 23:37:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:13.593 23:37:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:16.127 23:37:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:16.127 23:37:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:16.127 23:37:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:24:16.127 23:37:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:16.127 23:37:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:16.127 23:37:36 -- common/autotest_common.sh@1187 -- # return 0 00:24:16.127 23:37:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.127 23:37:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:16.385 23:37:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:16.385 23:37:37 -- common/autotest_common.sh@1177 -- # local i=0 00:24:16.385 23:37:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.385 23:37:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:16.386 23:37:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:18.302 23:37:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:18.302 23:37:39 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:24:18.302 23:37:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:24:18.560 23:37:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:18.560 23:37:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:18.560 23:37:39 -- common/autotest_common.sh@1187 -- # return 0 00:24:18.560 23:37:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.560 23:37:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:19.127 23:37:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:19.127 23:37:39 -- common/autotest_common.sh@1177 -- # local i=0 00:24:19.127 23:37:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:19.127 23:37:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:19.127 23:37:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:21.657 23:37:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:21.657 23:37:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:21.657 23:37:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:24:21.657 23:37:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:21.657 23:37:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.657 23:37:42 -- common/autotest_common.sh@1187 -- # return 0 00:24:21.657 23:37:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.657 23:37:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:21.915 23:37:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:21.915 23:37:42 -- common/autotest_common.sh@1177 -- # local i=0 00:24:21.915 23:37:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.915 23:37:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:21.915 23:37:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:24.447 23:37:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:24.447 23:37:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:24.447 23:37:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:24:24.447 23:37:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:24.447 23:37:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.447 23:37:44 -- common/autotest_common.sh@1187 -- # return 0 00:24:24.447 23:37:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:24.447 23:37:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:25.013 23:37:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:25.013 23:37:45 -- common/autotest_common.sh@1177 -- # local i=0 00:24:25.013 23:37:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.013 23:37:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:25.013 23:37:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:26.957 
23:37:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:26.957 23:37:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:26.957 23:37:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:24:26.957 23:37:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:26.957 23:37:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.957 23:37:47 -- common/autotest_common.sh@1187 -- # return 0 00:24:26.957 23:37:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.957 23:37:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:27.542 23:37:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:27.542 23:37:48 -- common/autotest_common.sh@1177 -- # local i=0 00:24:27.542 23:37:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.542 23:37:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:27.542 23:37:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:30.072 23:37:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:30.072 23:37:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:30.072 23:37:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:24:30.072 23:37:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:30.072 23:37:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:30.072 23:37:50 -- common/autotest_common.sh@1187 -- # return 0 00:24:30.072 23:37:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.072 23:37:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:30.338 23:37:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:30.338 23:37:51 -- common/autotest_common.sh@1177 -- # local i=0 00:24:30.338 23:37:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.338 23:37:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:30.338 23:37:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:32.881 23:37:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:32.881 23:37:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:32.881 23:37:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:24:32.881 23:37:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:32.881 23:37:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.881 23:37:53 -- common/autotest_common.sh@1187 -- # return 0 00:24:32.881 23:37:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.881 23:37:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:33.139 23:37:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:33.139 23:37:54 -- common/autotest_common.sh@1177 -- # local i=0 00:24:33.139 23:37:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:33.139 23:37:54 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:24:33.139 23:37:54 -- common/autotest_common.sh@1184 -- # sleep 2
00:24:35.668 23:37:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:24:35.668 23:37:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:24:35.668 23:37:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK9
00:24:35.668 23:37:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:24:35.668 23:37:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:24:35.668 23:37:56 -- common/autotest_common.sh@1187 -- # return 0
00:24:35.668 23:37:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:35.668 23:37:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:24:36.233 23:37:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:24:36.233 23:37:57 -- common/autotest_common.sh@1177 -- # local i=0
00:24:36.233 23:37:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:24:36.233 23:37:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:24:36.233 23:37:57 -- common/autotest_common.sh@1184 -- # sleep 2
00:24:38.132 23:37:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:24:38.132 23:37:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:24:38.132 23:37:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK10
00:24:38.132 23:37:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:24:38.132 23:37:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:24:38.132 23:37:59 -- common/autotest_common.sh@1187 -- # return 0
00:24:38.132 23:37:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:38.132 23:37:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:24:39.068 23:37:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:24:39.068 23:37:59 -- common/autotest_common.sh@1177 -- # local i=0
00:24:39.068 23:37:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:24:39.068 23:37:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:24:39.068 23:37:59 -- common/autotest_common.sh@1184 -- # sleep 2
00:24:40.968 23:38:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:24:40.968 23:38:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:24:40.968 23:38:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK11
00:24:40.968 23:38:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:24:40.968 23:38:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:24:40.968 23:38:01 -- common/autotest_common.sh@1187 -- # return 0
00:24:40.968 23:38:01 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:24:40.968 [global]
00:24:40.968 thread=1
00:24:40.968 invalidate=1
00:24:40.968 rw=read
00:24:40.968 time_based=1
00:24:40.968 runtime=10
00:24:40.968 ioengine=libaio
00:24:40.968 direct=1
00:24:40.968 bs=262144
00:24:40.968 iodepth=64
00:24:40.968 norandommap=1
00:24:40.968 numjobs=1
00:24:40.968
00:24:40.968 [job0]
00:24:40.968 filename=/dev/nvme0n1 00:24:40.968 [job1] 00:24:40.968 filename=/dev/nvme10n1 00:24:40.968 [job2] 00:24:40.968 filename=/dev/nvme1n1 00:24:40.968 [job3] 00:24:40.968 filename=/dev/nvme2n1 00:24:40.968 [job4] 00:24:40.968 filename=/dev/nvme3n1 00:24:40.968 [job5] 00:24:40.968 filename=/dev/nvme4n1 00:24:40.968 [job6] 00:24:40.968 filename=/dev/nvme5n1 00:24:41.226 [job7] 00:24:41.226 filename=/dev/nvme6n1 00:24:41.226 [job8] 00:24:41.226 filename=/dev/nvme7n1 00:24:41.226 [job9] 00:24:41.226 filename=/dev/nvme8n1 00:24:41.226 [job10] 00:24:41.226 filename=/dev/nvme9n1 00:24:41.226 Could not set queue depth (nvme0n1) 00:24:41.226 Could not set queue depth (nvme10n1) 00:24:41.226 Could not set queue depth (nvme1n1) 00:24:41.226 Could not set queue depth (nvme2n1) 00:24:41.226 Could not set queue depth (nvme3n1) 00:24:41.226 Could not set queue depth (nvme4n1) 00:24:41.226 Could not set queue depth (nvme5n1) 00:24:41.226 Could not set queue depth (nvme6n1) 00:24:41.226 Could not set queue depth (nvme7n1) 00:24:41.226 Could not set queue depth (nvme8n1) 00:24:41.226 Could not set queue depth (nvme9n1) 00:24:41.484 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.484 fio-3.35 00:24:41.484 Starting 11 threads 00:24:53.682 00:24:53.682 job0: (groupid=0, jobs=1): err= 0: pid=307269: Thu Jul 11 23:38:12 2024 00:24:53.682 read: IOPS=676, BW=169MiB/s (177MB/s)(1711MiB/10118msec) 00:24:53.682 slat (usec): min=10, max=138471, avg=885.34, stdev=4288.70 00:24:53.682 clat (msec): min=3, max=269, avg=93.58, stdev=57.15 00:24:53.682 lat (msec): min=3, max=314, avg=94.46, stdev=57.73 00:24:53.682 clat percentiles (msec): 00:24:53.682 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 26], 20.00th=[ 40], 00:24:53.682 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 83], 60.00th=[ 99], 00:24:53.682 | 70.00th=[ 118], 80.00th=[ 146], 90.00th=[ 176], 95.00th=[ 205], 00:24:53.682 | 99.00th=[ 241], 99.50th=[ 253], 99.90th=[ 266], 99.95th=[ 268], 00:24:53.682 | 99.99th=[ 271] 00:24:53.682 bw ( KiB/s): min=69493, max=394752, per=8.80%, avg=173514.70, stdev=73637.24, samples=20 00:24:53.682 iops : min= 271, max= 1542, avg=677.75, stdev=287.68, samples=20 00:24:53.682 lat (msec) : 4=0.03%, 10=1.01%, 20=4.12%, 50=20.59%, 100=35.34% 00:24:53.682 lat 
(msec) : 250=38.39%, 500=0.53% 00:24:53.682 cpu : usr=0.35%, sys=1.87%, ctx=1939, majf=0, minf=4097 00:24:53.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.682 issued rwts: total=6843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.682 job1: (groupid=0, jobs=1): err= 0: pid=307270: Thu Jul 11 23:38:12 2024 00:24:53.682 read: IOPS=908, BW=227MiB/s (238MB/s)(2299MiB/10119msec) 00:24:53.682 slat (usec): min=10, max=87463, avg=632.53, stdev=3197.37 00:24:53.682 clat (usec): min=1120, max=268651, avg=69689.00, stdev=52578.82 00:24:53.682 lat (usec): min=1164, max=268675, avg=70321.52, stdev=52963.66 00:24:53.682 clat percentiles (msec): 00:24:53.682 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 34], 00:24:53.682 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 52], 60.00th=[ 63], 00:24:53.682 | 70.00th=[ 78], 80.00th=[ 105], 90.00th=[ 153], 95.00th=[ 190], 00:24:53.682 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 257], 99.95th=[ 257], 00:24:53.682 | 99.99th=[ 271] 00:24:53.682 bw ( KiB/s): min=81920, max=455168, per=11.86%, avg=233715.45, stdev=95856.29, samples=20 00:24:53.682 iops : min= 320, max= 1778, avg=912.85, stdev=374.44, samples=20 00:24:53.682 lat (msec) : 2=0.15%, 4=0.89%, 10=1.95%, 20=4.43%, 50=41.47% 00:24:53.682 lat (msec) : 100=30.03%, 250=20.63%, 500=0.45% 00:24:53.682 cpu : usr=0.39%, sys=2.42%, ctx=2587, majf=0, minf=4097 00:24:53.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.682 issued rwts: total=9196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.682 job2: (groupid=0, jobs=1): err= 0: pid=307278: Thu Jul 11 23:38:12 2024 00:24:53.682 read: IOPS=625, BW=156MiB/s (164MB/s)(1576MiB/10083msec) 00:24:53.682 slat (usec): min=10, max=168552, avg=828.04, stdev=5330.66 00:24:53.682 clat (usec): min=1776, max=339943, avg=101393.99, stdev=62448.00 00:24:53.682 lat (usec): min=1798, max=382000, avg=102222.03, stdev=63181.25 00:24:53.682 clat percentiles (msec): 00:24:53.682 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 29], 20.00th=[ 49], 00:24:53.682 | 30.00th=[ 65], 40.00th=[ 80], 50.00th=[ 90], 60.00th=[ 103], 00:24:53.682 | 70.00th=[ 123], 80.00th=[ 153], 90.00th=[ 190], 95.00th=[ 220], 00:24:53.682 | 99.00th=[ 313], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 334], 00:24:53.682 | 99.99th=[ 342] 00:24:53.682 bw ( KiB/s): min=67584, max=248320, per=8.10%, avg=159710.05, stdev=46082.47, samples=20 00:24:53.682 iops : min= 264, max= 970, avg=623.75, stdev=180.03, samples=20 00:24:53.682 lat (msec) : 2=0.02%, 4=0.29%, 10=1.43%, 20=3.20%, 50=15.75% 00:24:53.682 lat (msec) : 100=37.80%, 250=38.80%, 500=2.71% 00:24:53.682 cpu : usr=0.38%, sys=1.68%, ctx=2020, majf=0, minf=4097 00:24:53.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.682 issued rwts: total=6304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.682 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:24:53.682 job3: (groupid=0, jobs=1): err= 0: pid=307279: Thu Jul 11 23:38:12 2024 00:24:53.682 read: IOPS=731, BW=183MiB/s (192MB/s)(1852MiB/10123msec) 00:24:53.682 slat (usec): min=10, max=127423, avg=582.39, stdev=4178.22 00:24:53.682 clat (usec): min=1102, max=305422, avg=86774.52, stdev=59068.15 00:24:53.682 lat (usec): min=1123, max=343223, avg=87356.91, stdev=59455.15 00:24:53.682 clat percentiles (msec): 00:24:53.682 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 37], 00:24:53.682 | 30.00th=[ 48], 40.00th=[ 60], 50.00th=[ 71], 60.00th=[ 85], 00:24:53.682 | 70.00th=[ 105], 80.00th=[ 140], 90.00th=[ 171], 95.00th=[ 211], 00:24:53.682 | 99.00th=[ 249], 99.50th=[ 257], 99.90th=[ 268], 99.95th=[ 292], 00:24:53.682 | 99.99th=[ 305] 00:24:53.682 bw ( KiB/s): min=73728, max=419840, per=9.53%, avg=187933.40, stdev=76012.41, samples=20 00:24:53.682 iops : min= 288, max= 1640, avg=734.00, stdev=296.91, samples=20 00:24:53.682 lat (msec) : 2=0.01%, 4=0.26%, 10=1.78%, 20=5.76%, 50=24.23% 00:24:53.682 lat (msec) : 100=35.73%, 250=31.25%, 500=0.97% 00:24:53.682 cpu : usr=0.34%, sys=1.93%, ctx=2119, majf=0, minf=4097 00:24:53.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.682 issued rwts: total=7408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.682 job4: (groupid=0, jobs=1): err= 0: pid=307280: Thu Jul 11 23:38:12 2024 00:24:53.682 read: IOPS=740, BW=185MiB/s (194MB/s)(1875MiB/10124msec) 00:24:53.682 slat (usec): min=10, max=145554, avg=766.92, stdev=4712.14 00:24:53.682 clat (usec): min=1709, max=373511, avg=85567.63, stdev=53242.99 00:24:53.682 lat (usec): min=1744, max=373531, avg=86334.54, stdev=53765.50 00:24:53.682 clat percentiles (msec): 00:24:53.682 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 25], 20.00th=[ 39], 00:24:53.682 | 30.00th=[ 53], 40.00th=[ 68], 50.00th=[ 80], 60.00th=[ 91], 00:24:53.682 | 70.00th=[ 104], 80.00th=[ 120], 90.00th=[ 157], 95.00th=[ 201], 00:24:53.682 | 99.00th=[ 249], 99.50th=[ 259], 99.90th=[ 292], 99.95th=[ 305], 00:24:53.682 | 99.99th=[ 376] 00:24:53.682 bw ( KiB/s): min=108032, max=254976, per=9.65%, avg=190208.20, stdev=44974.90, samples=20 00:24:53.682 iops : min= 422, max= 996, avg=742.85, stdev=175.75, samples=20 00:24:53.682 lat (msec) : 2=0.03%, 4=0.11%, 10=1.59%, 20=5.43%, 50=20.74% 00:24:53.682 lat (msec) : 100=39.69%, 250=31.52%, 500=0.91% 00:24:53.682 cpu : usr=0.35%, sys=1.93%, ctx=2117, majf=0, minf=4097 00:24:53.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.682 issued rwts: total=7498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.682 job5: (groupid=0, jobs=1): err= 0: pid=307281: Thu Jul 11 23:38:12 2024 00:24:53.682 read: IOPS=624, BW=156MiB/s (164MB/s)(1581MiB/10129msec) 00:24:53.682 slat (usec): min=9, max=181346, avg=983.14, stdev=5063.64 00:24:53.682 clat (usec): min=1116, max=305828, avg=101421.93, stdev=58823.91 00:24:53.682 lat (usec): min=1140, max=305844, avg=102405.07, stdev=59472.65 00:24:53.682 clat percentiles (msec): 00:24:53.682 | 1.00th=[ 4], 
5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 45], 00:24:53.682 | 30.00th=[ 69], 40.00th=[ 88], 50.00th=[ 101], 60.00th=[ 113], 00:24:53.682 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 182], 95.00th=[ 207], 00:24:53.682 | 99.00th=[ 264], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 305], 00:24:53.682 | 99.99th=[ 305] 00:24:53.683 bw ( KiB/s): min=77312, max=227385, per=8.13%, avg=160175.35, stdev=46334.15, samples=20 00:24:53.683 iops : min= 302, max= 888, avg=625.60, stdev=180.96, samples=20 00:24:53.683 lat (msec) : 2=0.59%, 4=0.70%, 10=2.88%, 20=5.58%, 50=12.78% 00:24:53.683 lat (msec) : 100=26.70%, 250=49.49%, 500=1.30% 00:24:53.683 cpu : usr=0.34%, sys=1.58%, ctx=1917, majf=0, minf=4097 00:24:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.683 issued rwts: total=6323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.683 job6: (groupid=0, jobs=1): err= 0: pid=307282: Thu Jul 11 23:38:12 2024 00:24:53.683 read: IOPS=735, BW=184MiB/s (193MB/s)(1854MiB/10084msec) 00:24:53.683 slat (usec): min=10, max=193504, avg=654.26, stdev=4867.36 00:24:53.683 clat (usec): min=1582, max=503507, avg=86301.56, stdev=64663.71 00:24:53.683 lat (usec): min=1603, max=503532, avg=86955.83, stdev=65327.36 00:24:53.683 clat percentiles (msec): 00:24:53.683 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 33], 00:24:53.683 | 30.00th=[ 46], 40.00th=[ 59], 50.00th=[ 78], 60.00th=[ 93], 00:24:53.683 | 70.00th=[ 108], 80.00th=[ 131], 90.00th=[ 167], 95.00th=[ 194], 00:24:53.683 | 99.00th=[ 249], 99.50th=[ 451], 99.90th=[ 498], 99.95th=[ 498], 00:24:53.683 | 99.99th=[ 506] 00:24:53.683 bw ( KiB/s): min=66560, max=326514, per=9.54%, avg=188097.45, stdev=81138.43, samples=20 00:24:53.683 iops : min= 260, max= 1275, avg=734.65, stdev=316.90, samples=20 00:24:53.683 lat (msec) : 2=0.07%, 4=0.34%, 10=4.55%, 20=6.76%, 50=22.43% 00:24:53.683 lat (msec) : 100=30.37%, 250=34.57%, 500=0.90%, 750=0.01% 00:24:53.683 cpu : usr=0.27%, sys=2.11%, ctx=2261, majf=0, minf=4097 00:24:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.683 issued rwts: total=7414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.683 job7: (groupid=0, jobs=1): err= 0: pid=307283: Thu Jul 11 23:38:12 2024 00:24:53.683 read: IOPS=651, BW=163MiB/s (171MB/s)(1650MiB/10131msec) 00:24:53.683 slat (usec): min=10, max=160576, avg=843.89, stdev=4805.90 00:24:53.683 clat (usec): min=956, max=288851, avg=97267.77, stdev=63808.15 00:24:53.683 lat (usec): min=975, max=288876, avg=98111.66, stdev=64334.27 00:24:53.683 clat percentiles (usec): 00:24:53.683 | 1.00th=[ 1500], 5.00th=[ 12911], 10.00th=[ 21365], 20.00th=[ 36439], 00:24:53.683 | 30.00th=[ 52691], 40.00th=[ 65799], 50.00th=[ 82314], 60.00th=[107480], 00:24:53.683 | 70.00th=[135267], 80.00th=[154141], 90.00th=[191890], 95.00th=[212861], 00:24:53.683 | 99.00th=[256902], 99.50th=[261096], 99.90th=[274727], 99.95th=[287310], 00:24:53.683 | 99.99th=[287310] 00:24:53.683 bw ( KiB/s): min=82432, max=311296, per=8.49%, avg=167260.20, stdev=63768.34, samples=20 00:24:53.683 
iops : min= 322, max= 1216, avg=653.30, stdev=249.11, samples=20 00:24:53.683 lat (usec) : 1000=0.06% 00:24:53.683 lat (msec) : 2=0.97%, 4=0.18%, 10=2.14%, 20=5.52%, 50=18.90% 00:24:53.683 lat (msec) : 100=30.11%, 250=40.49%, 500=1.64% 00:24:53.683 cpu : usr=0.40%, sys=1.68%, ctx=2001, majf=0, minf=4097 00:24:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.683 issued rwts: total=6599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.683 job8: (groupid=0, jobs=1): err= 0: pid=307284: Thu Jul 11 23:38:12 2024 00:24:53.683 read: IOPS=681, BW=170MiB/s (179MB/s)(1717MiB/10086msec) 00:24:53.683 slat (usec): min=10, max=107088, avg=779.44, stdev=3567.23 00:24:53.683 clat (usec): min=1095, max=331878, avg=93105.03, stdev=54069.18 00:24:53.683 lat (usec): min=1120, max=331898, avg=93884.46, stdev=54482.43 00:24:53.683 clat percentiles (msec): 00:24:53.683 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 54], 00:24:53.683 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 94], 00:24:53.683 | 70.00th=[ 107], 80.00th=[ 134], 90.00th=[ 174], 95.00th=[ 203], 00:24:53.683 | 99.00th=[ 262], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326], 00:24:53.683 | 99.99th=[ 334] 00:24:53.683 bw ( KiB/s): min=79872, max=308736, per=8.83%, avg=174140.65, stdev=59156.27, samples=20 00:24:53.683 iops : min= 312, max= 1206, avg=680.15, stdev=231.10, samples=20 00:24:53.683 lat (msec) : 2=0.07%, 4=0.17%, 10=1.44%, 20=2.77%, 50=12.71% 00:24:53.683 lat (msec) : 100=47.14%, 250=34.44%, 500=1.25% 00:24:53.683 cpu : usr=0.25%, sys=1.91%, ctx=1992, majf=0, minf=4097 00:24:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.683 issued rwts: total=6869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.683 job9: (groupid=0, jobs=1): err= 0: pid=307285: Thu Jul 11 23:38:12 2024 00:24:53.683 read: IOPS=676, BW=169MiB/s (177MB/s)(1708MiB/10089msec) 00:24:53.683 slat (usec): min=10, max=167862, avg=864.22, stdev=4755.78 00:24:53.683 clat (msec): min=2, max=372, avg=93.58, stdev=52.13 00:24:53.683 lat (msec): min=3, max=372, avg=94.45, stdev=52.68 00:24:53.683 clat percentiles (msec): 00:24:53.683 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 35], 20.00th=[ 48], 00:24:53.683 | 30.00th=[ 63], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 95], 00:24:53.683 | 70.00th=[ 111], 80.00th=[ 140], 90.00th=[ 171], 95.00th=[ 199], 00:24:53.683 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 247], 99.95th=[ 253], 00:24:53.683 | 99.99th=[ 372] 00:24:53.683 bw ( KiB/s): min=74752, max=297900, per=8.78%, avg=173151.20, stdev=57994.92, samples=20 00:24:53.683 iops : min= 292, max= 1163, avg=676.25, stdev=226.41, samples=20 00:24:53.683 lat (msec) : 4=0.16%, 10=1.13%, 20=3.32%, 50=17.26%, 100=41.61% 00:24:53.683 lat (msec) : 250=36.46%, 500=0.06% 00:24:53.683 cpu : usr=0.48%, sys=1.66%, ctx=1917, majf=0, minf=4097 00:24:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.683 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.683 issued rwts: total=6830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.683 job10: (groupid=0, jobs=1): err= 0: pid=307292: Thu Jul 11 23:38:12 2024 00:24:53.683 read: IOPS=670, BW=168MiB/s (176MB/s)(1681MiB/10027msec) 00:24:53.683 slat (usec): min=10, max=151227, avg=962.63, stdev=5020.74 00:24:53.683 clat (usec): min=1170, max=360614, avg=94351.89, stdev=60677.79 00:24:53.683 lat (usec): min=1206, max=360632, avg=95314.53, stdev=61362.92 00:24:53.683 clat percentiles (msec): 00:24:53.683 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 44], 00:24:53.683 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 84], 60.00th=[ 101], 00:24:53.683 | 70.00th=[ 120], 80.00th=[ 138], 90.00th=[ 186], 95.00th=[ 215], 00:24:53.683 | 99.00th=[ 279], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 300], 00:24:53.683 | 99.99th=[ 359] 00:24:53.683 bw ( KiB/s): min=74602, max=283081, per=8.65%, avg=170487.00, stdev=55949.39, samples=20 00:24:53.683 iops : min= 291, max= 1105, avg=665.85, stdev=218.54, samples=20 00:24:53.683 lat (msec) : 2=0.10%, 4=0.67%, 10=3.46%, 20=4.70%, 50=14.96% 00:24:53.683 lat (msec) : 100=35.97%, 250=38.66%, 500=1.47% 00:24:53.683 cpu : usr=0.25%, sys=1.98%, ctx=1868, majf=0, minf=3721 00:24:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:53.683 issued rwts: total=6725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:53.683 00:24:53.683 Run status group 0 (all jobs): 00:24:53.683 READ: bw=1925MiB/s (2019MB/s), 156MiB/s-227MiB/s (164MB/s-238MB/s), io=19.0GiB (20.4GB), run=10027-10131msec 00:24:53.683 00:24:53.683 Disk stats (read/write): 00:24:53.683 nvme0n1: ios=13468/0, merge=0/0, ticks=1229970/0, in_queue=1229970, util=96.54% 00:24:53.683 nvme10n1: ios=17949/0, merge=0/0, ticks=1231748/0, in_queue=1231748, util=96.88% 00:24:53.683 nvme1n1: ios=12339/0, merge=0/0, ticks=1233967/0, in_queue=1233967, util=97.20% 00:24:53.683 nvme2n1: ios=14564/0, merge=0/0, ticks=1236662/0, in_queue=1236662, util=97.40% 00:24:53.683 nvme3n1: ios=14723/0, merge=0/0, ticks=1233173/0, in_queue=1233173, util=97.49% 00:24:53.683 nvme4n1: ios=12645/0, merge=0/0, ticks=1262200/0, in_queue=1262200, util=97.98% 00:24:53.683 nvme5n1: ios=14496/0, merge=0/0, ticks=1235137/0, in_queue=1235137, util=98.11% 00:24:53.683 nvme6n1: ios=13136/0, merge=0/0, ticks=1255356/0, in_queue=1255356, util=98.30% 00:24:53.683 nvme7n1: ios=13465/0, merge=0/0, ticks=1230462/0, in_queue=1230462, util=98.81% 00:24:53.683 nvme8n1: ios=13378/0, merge=0/0, ticks=1229477/0, in_queue=1229477, util=99.05% 00:24:53.683 nvme9n1: ios=12964/0, merge=0/0, ticks=1230420/0, in_queue=1230420, util=99.22% 00:24:53.683 23:38:12 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:53.683 [global] 00:24:53.683 thread=1 00:24:53.683 invalidate=1 00:24:53.683 rw=randwrite 00:24:53.683 time_based=1 00:24:53.683 runtime=10 00:24:53.683 ioengine=libaio 00:24:53.683 direct=1 00:24:53.683 bs=262144 00:24:53.683 iodepth=64 00:24:53.683 norandommap=1 00:24:53.683 numjobs=1 00:24:53.683 00:24:53.683 [job0] 00:24:53.683 filename=/dev/nvme0n1 00:24:53.683 [job1] 
00:24:53.683 filename=/dev/nvme10n1 00:24:53.683 [job2] 00:24:53.683 filename=/dev/nvme1n1 00:24:53.683 [job3] 00:24:53.683 filename=/dev/nvme2n1 00:24:53.683 [job4] 00:24:53.683 filename=/dev/nvme3n1 00:24:53.683 [job5] 00:24:53.683 filename=/dev/nvme4n1 00:24:53.683 [job6] 00:24:53.683 filename=/dev/nvme5n1 00:24:53.683 [job7] 00:24:53.683 filename=/dev/nvme6n1 00:24:53.683 [job8] 00:24:53.683 filename=/dev/nvme7n1 00:24:53.683 [job9] 00:24:53.683 filename=/dev/nvme8n1 00:24:53.683 [job10] 00:24:53.683 filename=/dev/nvme9n1 00:24:53.683 Could not set queue depth (nvme0n1) 00:24:53.683 Could not set queue depth (nvme10n1) 00:24:53.683 Could not set queue depth (nvme1n1) 00:24:53.683 Could not set queue depth (nvme2n1) 00:24:53.683 Could not set queue depth (nvme3n1) 00:24:53.683 Could not set queue depth (nvme4n1) 00:24:53.683 Could not set queue depth (nvme5n1) 00:24:53.683 Could not set queue depth (nvme6n1) 00:24:53.683 Could not set queue depth (nvme7n1) 00:24:53.683 Could not set queue depth (nvme8n1) 00:24:53.683 Could not set queue depth (nvme9n1) 00:24:53.683 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:53.683 fio-3.35 00:24:53.683 Starting 11 threads 00:25:03.701 00:25:03.701 job0: (groupid=0, jobs=1): err= 0: pid=308478: Thu Jul 11 23:38:23 2024 00:25:03.701 write: IOPS=539, BW=135MiB/s (141MB/s)(1358MiB/10074msec); 0 zone resets 00:25:03.701 slat (usec): min=26, max=147065, avg=1362.99, stdev=4723.96 00:25:03.701 clat (usec): min=1655, max=395905, avg=117130.64, stdev=73054.53 00:25:03.701 lat (usec): min=1723, max=398151, avg=118493.63, stdev=73894.83 00:25:03.701 clat percentiles (msec): 00:25:03.701 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 57], 00:25:03.701 | 30.00th=[ 78], 40.00th=[ 91], 50.00th=[ 104], 60.00th=[ 121], 00:25:03.701 | 70.00th=[ 138], 80.00th=[ 161], 90.00th=[ 220], 95.00th=[ 266], 00:25:03.701 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 393], 00:25:03.701 | 99.99th=[ 397] 00:25:03.701 bw ( KiB/s): min=63488, max=233984, per=9.71%, avg=137477.75, stdev=47142.44, samples=20 00:25:03.701 iops : min= 248, max= 914, avg=537.00, stdev=184.16, samples=20 00:25:03.701 lat (msec) : 2=0.02%, 4=0.11%, 
10=1.88%, 20=2.21%, 50=12.66% 00:25:03.701 lat (msec) : 100=30.68%, 250=46.29%, 500=6.15% 00:25:03.701 cpu : usr=2.17%, sys=1.36%, ctx=2999, majf=0, minf=1 00:25:03.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:03.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.701 issued rwts: total=0,5433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.701 job1: (groupid=0, jobs=1): err= 0: pid=308479: Thu Jul 11 23:38:23 2024 00:25:03.701 write: IOPS=517, BW=129MiB/s (136MB/s)(1315MiB/10165msec); 0 zone resets 00:25:03.701 slat (usec): min=27, max=107807, avg=853.94, stdev=3176.71 00:25:03.701 clat (usec): min=1460, max=397092, avg=122695.67, stdev=66751.94 00:25:03.701 lat (usec): min=1517, max=397134, avg=123549.61, stdev=67194.92 00:25:03.701 clat percentiles (msec): 00:25:03.701 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 58], 00:25:03.701 | 30.00th=[ 79], 40.00th=[ 107], 50.00th=[ 124], 60.00th=[ 140], 00:25:03.701 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 207], 95.00th=[ 234], 00:25:03.701 | 99.00th=[ 271], 99.50th=[ 326], 99.90th=[ 372], 99.95th=[ 372], 00:25:03.701 | 99.99th=[ 397] 00:25:03.701 bw ( KiB/s): min=83968, max=193536, per=9.39%, avg=133031.10, stdev=34100.47, samples=20 00:25:03.701 iops : min= 328, max= 756, avg=519.65, stdev=133.20, samples=20 00:25:03.701 lat (msec) : 2=0.19%, 4=0.42%, 10=1.16%, 20=3.14%, 50=11.56% 00:25:03.701 lat (msec) : 100=21.91%, 250=58.47%, 500=3.16% 00:25:03.701 cpu : usr=2.04%, sys=1.34%, ctx=3614, majf=0, minf=1 00:25:03.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:03.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.701 issued rwts: total=0,5259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.701 job2: (groupid=0, jobs=1): err= 0: pid=308480: Thu Jul 11 23:38:23 2024 00:25:03.701 write: IOPS=392, BW=98.1MiB/s (103MB/s)(988MiB/10074msec); 0 zone resets 00:25:03.701 slat (usec): min=28, max=81897, avg=2153.39, stdev=5556.05 00:25:03.701 clat (usec): min=1919, max=328107, avg=160827.21, stdev=74112.84 00:25:03.701 lat (msec): min=2, max=330, avg=162.98, stdev=75.07 00:25:03.701 clat percentiles (msec): 00:25:03.701 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 52], 20.00th=[ 84], 00:25:03.701 | 30.00th=[ 117], 40.00th=[ 150], 50.00th=[ 176], 60.00th=[ 194], 00:25:03.701 | 70.00th=[ 215], 80.00th=[ 230], 90.00th=[ 245], 95.00th=[ 264], 00:25:03.701 | 99.00th=[ 284], 99.50th=[ 317], 99.90th=[ 326], 99.95th=[ 330], 00:25:03.701 | 99.99th=[ 330] 00:25:03.701 bw ( KiB/s): min=63488, max=176128, per=7.03%, avg=99566.45, stdev=33212.53, samples=20 00:25:03.701 iops : min= 248, max= 688, avg=388.90, stdev=129.66, samples=20 00:25:03.701 lat (msec) : 2=0.03%, 4=0.35%, 10=1.85%, 20=2.25%, 50=5.34% 00:25:03.701 lat (msec) : 100=16.54%, 250=65.62%, 500=8.02% 00:25:03.701 cpu : usr=1.54%, sys=1.03%, ctx=1743, majf=0, minf=1 00:25:03.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:03.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.701 issued rwts: 
total=0,3953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.701 job3: (groupid=0, jobs=1): err= 0: pid=308481: Thu Jul 11 23:38:23 2024 00:25:03.701 write: IOPS=445, BW=111MiB/s (117MB/s)(1133MiB/10164msec); 0 zone resets 00:25:03.701 slat (usec): min=29, max=89053, avg=1524.29, stdev=4583.09 00:25:03.701 clat (msec): min=2, max=354, avg=141.97, stdev=75.65 00:25:03.701 lat (msec): min=2, max=354, avg=143.49, stdev=76.78 00:25:03.701 clat percentiles (msec): 00:25:03.701 | 1.00th=[ 12], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 66], 00:25:03.701 | 30.00th=[ 100], 40.00th=[ 117], 50.00th=[ 136], 60.00th=[ 169], 00:25:03.701 | 70.00th=[ 197], 80.00th=[ 211], 90.00th=[ 228], 95.00th=[ 255], 00:25:03.701 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 351], 00:25:03.701 | 99.99th=[ 355] 00:25:03.701 bw ( KiB/s): min=49152, max=236544, per=8.07%, avg=114363.55, stdev=41423.11, samples=20 00:25:03.701 iops : min= 192, max= 924, avg=446.70, stdev=161.83, samples=20 00:25:03.701 lat (msec) : 4=0.15%, 10=0.73%, 20=2.54%, 50=11.63%, 100=15.01% 00:25:03.701 lat (msec) : 250=64.17%, 500=5.76% 00:25:03.701 cpu : usr=1.74%, sys=1.38%, ctx=2791, majf=0, minf=1 00:25:03.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:03.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.701 issued rwts: total=0,4530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.701 job4: (groupid=0, jobs=1): err= 0: pid=308482: Thu Jul 11 23:38:23 2024 00:25:03.701 write: IOPS=460, BW=115MiB/s (121MB/s)(1169MiB/10164msec); 0 zone resets 00:25:03.701 slat (usec): min=28, max=113225, avg=1109.77, stdev=5262.14 00:25:03.701 clat (usec): min=1807, max=487838, avg=137841.07, stdev=87562.80 00:25:03.701 lat (usec): min=1889, max=487905, avg=138950.84, stdev=88580.04 00:25:03.702 clat percentiles (msec): 00:25:03.702 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 52], 00:25:03.702 | 30.00th=[ 78], 40.00th=[ 104], 50.00th=[ 132], 60.00th=[ 155], 00:25:03.702 | 70.00th=[ 180], 80.00th=[ 213], 90.00th=[ 257], 95.00th=[ 292], 00:25:03.702 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 481], 00:25:03.702 | 99.99th=[ 489] 00:25:03.702 bw ( KiB/s): min=42496, max=194560, per=8.34%, avg=118106.20, stdev=44423.76, samples=20 00:25:03.702 iops : min= 166, max= 760, avg=461.35, stdev=173.53, samples=20 00:25:03.702 lat (msec) : 2=0.09%, 4=0.45%, 10=2.31%, 20=2.89%, 50=13.67% 00:25:03.702 lat (msec) : 100=19.08%, 250=50.32%, 500=11.21% 00:25:03.702 cpu : usr=1.90%, sys=1.28%, ctx=3474, majf=0, minf=1 00:25:03.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.702 issued rwts: total=0,4676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.702 job5: (groupid=0, jobs=1): err= 0: pid=308494: Thu Jul 11 23:38:23 2024 00:25:03.702 write: IOPS=544, BW=136MiB/s (143MB/s)(1383MiB/10166msec); 0 zone resets 00:25:03.702 slat (usec): min=22, max=136610, avg=795.64, stdev=3639.37 00:25:03.702 clat (usec): min=1264, max=432359, avg=116458.72, stdev=79385.96 00:25:03.702 lat (usec): 
min=1305, max=436589, avg=117254.36, stdev=80100.59 00:25:03.702 clat percentiles (msec): 00:25:03.702 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 51], 00:25:03.702 | 30.00th=[ 62], 40.00th=[ 82], 50.00th=[ 101], 60.00th=[ 120], 00:25:03.702 | 70.00th=[ 146], 80.00th=[ 180], 90.00th=[ 230], 95.00th=[ 266], 00:25:03.702 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 426], 99.95th=[ 430], 00:25:03.702 | 99.99th=[ 435] 00:25:03.702 bw ( KiB/s): min=73216, max=245248, per=9.88%, avg=139986.65, stdev=50902.11, samples=20 00:25:03.702 iops : min= 286, max= 958, avg=546.80, stdev=198.85, samples=20 00:25:03.702 lat (msec) : 2=0.13%, 4=0.47%, 10=1.86%, 20=3.53%, 50=13.76% 00:25:03.702 lat (msec) : 100=30.18%, 250=43.08%, 500=7.00% 00:25:03.702 cpu : usr=1.91%, sys=1.70%, ctx=4101, majf=0, minf=1 00:25:03.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.702 issued rwts: total=0,5531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.702 job6: (groupid=0, jobs=1): err= 0: pid=308495: Thu Jul 11 23:38:23 2024 00:25:03.702 write: IOPS=443, BW=111MiB/s (116MB/s)(1118MiB/10077msec); 0 zone resets 00:25:03.702 slat (usec): min=18, max=82632, avg=1769.76, stdev=4562.03 00:25:03.702 clat (usec): min=1917, max=404029, avg=142442.41, stdev=69582.67 00:25:03.702 lat (usec): min=1994, max=404287, avg=144212.17, stdev=70506.20 00:25:03.702 clat percentiles (msec): 00:25:03.702 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 45], 20.00th=[ 79], 00:25:03.702 | 30.00th=[ 97], 40.00th=[ 130], 50.00th=[ 148], 60.00th=[ 165], 00:25:03.702 | 70.00th=[ 182], 80.00th=[ 203], 90.00th=[ 228], 95.00th=[ 253], 00:25:03.702 | 99.00th=[ 288], 99.50th=[ 342], 99.90th=[ 393], 99.95th=[ 397], 00:25:03.702 | 99.99th=[ 405] 00:25:03.702 bw ( KiB/s): min=61440, max=197120, per=7.97%, avg=112817.00, stdev=37856.83, samples=20 00:25:03.702 iops : min= 240, max= 770, avg=440.65, stdev=147.91, samples=20 00:25:03.702 lat (msec) : 2=0.02%, 4=0.11%, 10=0.94%, 20=2.44%, 50=7.63% 00:25:03.702 lat (msec) : 100=19.49%, 250=63.58%, 500=5.79% 00:25:03.702 cpu : usr=1.53%, sys=1.43%, ctx=2239, majf=0, minf=1 00:25:03.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.702 issued rwts: total=0,4470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.702 job7: (groupid=0, jobs=1): err= 0: pid=308496: Thu Jul 11 23:38:23 2024 00:25:03.702 write: IOPS=599, BW=150MiB/s (157MB/s)(1510MiB/10077msec); 0 zone resets 00:25:03.702 slat (usec): min=18, max=57999, avg=1092.40, stdev=3004.88 00:25:03.702 clat (msec): min=2, max=326, avg=105.64, stdev=54.12 00:25:03.702 lat (msec): min=3, max=334, avg=106.73, stdev=54.65 00:25:03.702 clat percentiles (msec): 00:25:03.702 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 41], 20.00th=[ 62], 00:25:03.702 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 115], 00:25:03.702 | 70.00th=[ 128], 80.00th=[ 150], 90.00th=[ 174], 95.00th=[ 205], 00:25:03.702 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 317], 00:25:03.702 | 99.99th=[ 326] 00:25:03.702 bw ( KiB/s): min=81920, 
max=221696, per=10.80%, avg=153028.95, stdev=39318.06, samples=20 00:25:03.702 iops : min= 320, max= 866, avg=597.75, stdev=153.57, samples=20 00:25:03.702 lat (msec) : 4=0.08%, 10=0.83%, 20=2.62%, 50=8.99%, 100=40.33% 00:25:03.702 lat (msec) : 250=45.58%, 500=1.57% 00:25:03.702 cpu : usr=2.37%, sys=1.66%, ctx=3390, majf=0, minf=1 00:25:03.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.702 issued rwts: total=0,6040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.702 job8: (groupid=0, jobs=1): err= 0: pid=308497: Thu Jul 11 23:38:23 2024 00:25:03.702 write: IOPS=483, BW=121MiB/s (127MB/s)(1217MiB/10074msec); 0 zone resets 00:25:03.702 slat (usec): min=23, max=75959, avg=1131.41, stdev=4099.46 00:25:03.702 clat (usec): min=1870, max=803805, avg=131253.32, stdev=80600.33 00:25:03.702 lat (usec): min=1926, max=803853, avg=132384.73, stdev=81435.43 00:25:03.702 clat percentiles (msec): 00:25:03.702 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 42], 20.00th=[ 70], 00:25:03.702 | 30.00th=[ 85], 40.00th=[ 102], 50.00th=[ 115], 60.00th=[ 136], 00:25:03.702 | 70.00th=[ 167], 80.00th=[ 197], 90.00th=[ 236], 95.00th=[ 264], 00:25:03.702 | 99.00th=[ 292], 99.50th=[ 451], 99.90th=[ 743], 99.95th=[ 743], 00:25:03.702 | 99.99th=[ 802] 00:25:03.702 bw ( KiB/s): min=60416, max=204697, per=8.69%, avg=123013.55, stdev=45946.20, samples=20 00:25:03.702 iops : min= 236, max= 799, avg=480.45, stdev=179.44, samples=20 00:25:03.702 lat (msec) : 2=0.02%, 4=0.37%, 10=1.13%, 20=2.47%, 50=8.79% 00:25:03.702 lat (msec) : 100=26.73%, 250=52.72%, 500=7.31%, 750=0.41%, 1000=0.04% 00:25:03.702 cpu : usr=2.03%, sys=1.24%, ctx=3372, majf=0, minf=1 00:25:03.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.702 issued rwts: total=0,4867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.702 job9: (groupid=0, jobs=1): err= 0: pid=308498: Thu Jul 11 23:38:23 2024 00:25:03.702 write: IOPS=619, BW=155MiB/s (162MB/s)(1560MiB/10073msec); 0 zone resets 00:25:03.702 slat (usec): min=22, max=33223, avg=855.48, stdev=2672.93 00:25:03.702 clat (usec): min=1391, max=677924, avg=102437.30, stdev=74825.04 00:25:03.702 lat (usec): min=1438, max=677994, avg=103292.78, stdev=75304.35 00:25:03.702 clat percentiles (msec): 00:25:03.702 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 39], 00:25:03.702 | 30.00th=[ 58], 40.00th=[ 78], 50.00th=[ 94], 60.00th=[ 114], 00:25:03.702 | 70.00th=[ 129], 80.00th=[ 150], 90.00th=[ 190], 95.00th=[ 218], 00:25:03.702 | 99.00th=[ 363], 99.50th=[ 550], 99.90th=[ 651], 99.95th=[ 651], 00:25:03.702 | 99.99th=[ 676] 00:25:03.702 bw ( KiB/s): min=75927, max=264192, per=11.16%, avg=158087.55, stdev=51610.78, samples=20 00:25:03.702 iops : min= 296, max= 1032, avg=617.50, stdev=201.65, samples=20 00:25:03.702 lat (msec) : 2=0.03%, 4=0.35%, 10=2.77%, 20=6.00%, 50=17.22% 00:25:03.702 lat (msec) : 100=26.51%, 250=45.66%, 500=0.95%, 750=0.51% 00:25:03.702 cpu : usr=2.28%, sys=1.71%, ctx=4258, majf=0, minf=1 00:25:03.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, 
>=64=99.0% 00:25:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.702 issued rwts: total=0,6238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.702 job10: (groupid=0, jobs=1): err= 0: pid=308499: Thu Jul 11 23:38:23 2024 00:25:03.702 write: IOPS=516, BW=129MiB/s (135MB/s)(1313MiB/10167msec); 0 zone resets 00:25:03.702 slat (usec): min=27, max=59590, avg=991.90, stdev=3259.19 00:25:03.702 clat (msec): min=2, max=671, avg=122.87, stdev=89.16 00:25:03.702 lat (msec): min=2, max=671, avg=123.86, stdev=89.50 00:25:03.702 clat percentiles (msec): 00:25:03.702 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 40], 20.00th=[ 64], 00:25:03.702 | 30.00th=[ 80], 40.00th=[ 88], 50.00th=[ 105], 60.00th=[ 123], 00:25:03.702 | 70.00th=[ 144], 80.00th=[ 169], 90.00th=[ 205], 95.00th=[ 247], 00:25:03.702 | 99.00th=[ 550], 99.50th=[ 575], 99.90th=[ 667], 99.95th=[ 667], 00:25:03.702 | 99.99th=[ 676] 00:25:03.702 bw ( KiB/s): min=44544, max=222208, per=9.38%, avg=132809.25, stdev=50146.74, samples=20 00:25:03.702 iops : min= 174, max= 868, avg=518.75, stdev=195.82, samples=20 00:25:03.702 lat (msec) : 4=0.30%, 10=1.39%, 20=2.80%, 50=9.28%, 100=33.43% 00:25:03.702 lat (msec) : 250=48.00%, 500=3.12%, 750=1.68% 00:25:03.702 cpu : usr=1.78%, sys=1.59%, ctx=3384, majf=0, minf=1 00:25:03.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.702 issued rwts: total=0,5250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.702 00:25:03.702 Run status group 0 (all jobs): 00:25:03.702 WRITE: bw=1383MiB/s (1450MB/s), 98.1MiB/s-155MiB/s (103MB/s-162MB/s), io=13.7GiB (14.7GB), run=10073-10167msec 00:25:03.702 00:25:03.702 Disk stats (read/write): 00:25:03.702 nvme0n1: ios=45/10491, merge=0/0, ticks=1742/1205879, in_queue=1207621, util=99.45% 00:25:03.702 nvme10n1: ios=53/10455, merge=0/0, ticks=2609/1247174, in_queue=1249783, util=99.68% 00:25:03.702 nvme1n1: ios=42/7539, merge=0/0, ticks=1355/1196859, in_queue=1198214, util=99.69% 00:25:03.702 nvme2n1: ios=49/8998, merge=0/0, ticks=54/1237096, in_queue=1237150, util=97.40% 00:25:03.702 nvme3n1: ios=49/9290, merge=0/0, ticks=2643/1237652, in_queue=1240295, util=100.00% 00:25:03.702 nvme4n1: ios=41/10997, merge=0/0, ticks=1421/1244916, in_queue=1246337, util=100.00% 00:25:03.702 nvme5n1: ios=18/8576, merge=0/0, ticks=148/1207824, in_queue=1207972, util=98.28% 00:25:03.702 nvme6n1: ios=31/11714, merge=0/0, ticks=168/1214568, in_queue=1214736, util=99.12% 00:25:03.702 nvme7n1: ios=0/9361, merge=0/0, ticks=0/1219883, in_queue=1219883, util=98.57% 00:25:03.702 nvme8n1: ios=0/12102, merge=0/0, ticks=0/1220873, in_queue=1220873, util=98.85% 00:25:03.702 nvme9n1: ios=0/10429, merge=0/0, ticks=0/1245797, in_queue=1245797, util=99.05% 00:25:03.703 23:38:23 -- target/multiconnection.sh@36 -- # sync 00:25:03.703 23:38:23 -- target/multiconnection.sh@37 -- # seq 1 11 00:25:03.703 23:38:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.703 23:38:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:03.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:03.703 
23:38:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:03.703 23:38:23 -- common/autotest_common.sh@1198 -- # local i=0 00:25:03.703 23:38:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:03.703 23:38:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:25:03.703 23:38:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:03.703 23:38:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:25:03.703 23:38:23 -- common/autotest_common.sh@1210 -- # return 0 00:25:03.703 23:38:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.703 23:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.703 23:38:23 -- common/autotest_common.sh@10 -- # set +x 00:25:03.703 23:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.703 23:38:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.703 23:38:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:03.703 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:03.703 23:38:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:03.703 23:38:24 -- common/autotest_common.sh@1198 -- # local i=0 00:25:03.703 23:38:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:03.703 23:38:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:25:03.703 23:38:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:03.703 23:38:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:25:03.703 23:38:24 -- common/autotest_common.sh@1210 -- # return 0 00:25:03.703 23:38:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:03.703 23:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.703 23:38:24 -- common/autotest_common.sh@10 -- # set +x 00:25:03.703 23:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.703 23:38:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.703 23:38:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:03.961 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:03.961 23:38:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:03.961 23:38:24 -- common/autotest_common.sh@1198 -- # local i=0 00:25:03.961 23:38:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:03.961 23:38:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:25:03.961 23:38:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:03.961 23:38:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:25:03.961 23:38:24 -- common/autotest_common.sh@1210 -- # return 0 00:25:03.961 23:38:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:03.961 23:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.961 23:38:24 -- common/autotest_common.sh@10 -- # set +x 00:25:03.961 23:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.961 23:38:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.961 23:38:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:03.961 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:03.961 23:38:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:03.961 23:38:24 
-- common/autotest_common.sh@1198 -- # local i=0 00:25:03.961 23:38:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:03.961 23:38:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:25:04.217 23:38:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:04.217 23:38:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:25:04.217 23:38:24 -- common/autotest_common.sh@1210 -- # return 0 00:25:04.217 23:38:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:04.217 23:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.217 23:38:24 -- common/autotest_common.sh@10 -- # set +x 00:25:04.217 23:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.217 23:38:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.217 23:38:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:04.474 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:04.475 23:38:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:04.475 23:38:25 -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.475 23:38:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:04.475 23:38:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:25:04.475 23:38:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:04.475 23:38:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:25:04.475 23:38:25 -- common/autotest_common.sh@1210 -- # return 0 00:25:04.475 23:38:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:04.475 23:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.475 23:38:25 -- common/autotest_common.sh@10 -- # set +x 00:25:04.475 23:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.475 23:38:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.475 23:38:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:04.475 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:04.475 23:38:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:04.475 23:38:25 -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.475 23:38:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:04.475 23:38:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:25:04.475 23:38:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:04.475 23:38:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:25:04.475 23:38:25 -- common/autotest_common.sh@1210 -- # return 0 00:25:04.475 23:38:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:04.475 23:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.475 23:38:25 -- common/autotest_common.sh@10 -- # set +x 00:25:04.475 23:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.475 23:38:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.475 23:38:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:04.732 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:04.732 23:38:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:04.732 23:38:25 -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.732 23:38:25 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:04.732 23:38:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:25:04.732 23:38:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:04.732 23:38:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:25:04.732 23:38:25 -- common/autotest_common.sh@1210 -- # return 0 00:25:04.732 23:38:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:04.732 23:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.732 23:38:25 -- common/autotest_common.sh@10 -- # set +x 00:25:04.732 23:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.732 23:38:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.732 23:38:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:04.732 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:04.732 23:38:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:04.732 23:38:25 -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.732 23:38:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:04.732 23:38:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:25:04.732 23:38:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:04.732 23:38:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:25:04.990 23:38:25 -- common/autotest_common.sh@1210 -- # return 0 00:25:04.990 23:38:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:04.990 23:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.990 23:38:25 -- common/autotest_common.sh@10 -- # set +x 00:25:04.990 23:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.990 23:38:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.990 23:38:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:04.990 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:04.990 23:38:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:04.990 23:38:25 -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.990 23:38:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:04.990 23:38:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:25:04.990 23:38:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:04.990 23:38:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:25:04.990 23:38:25 -- common/autotest_common.sh@1210 -- # return 0 00:25:04.990 23:38:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:04.990 23:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.990 23:38:25 -- common/autotest_common.sh@10 -- # set +x 00:25:04.990 23:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.990 23:38:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.990 23:38:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:04.990 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:04.990 23:38:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:04.990 23:38:25 -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.990 23:38:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:04.990 23:38:25 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:25:04.990 23:38:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:04.990 23:38:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:25:04.990 23:38:25 -- common/autotest_common.sh@1210 -- # return 0 00:25:04.990 23:38:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:04.990 23:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.990 23:38:25 -- common/autotest_common.sh@10 -- # set +x 00:25:04.990 23:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.990 23:38:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.990 23:38:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:05.248 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:05.248 23:38:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:05.248 23:38:26 -- common/autotest_common.sh@1198 -- # local i=0 00:25:05.248 23:38:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:05.248 23:38:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:25:05.248 23:38:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:05.248 23:38:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:25:05.248 23:38:26 -- common/autotest_common.sh@1210 -- # return 0 00:25:05.248 23:38:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:05.248 23:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.248 23:38:26 -- common/autotest_common.sh@10 -- # set +x 00:25:05.248 23:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.248 23:38:26 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:05.248 23:38:26 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:05.248 23:38:26 -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:05.248 23:38:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:05.248 23:38:26 -- nvmf/common.sh@116 -- # sync 00:25:05.248 23:38:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:05.248 23:38:26 -- nvmf/common.sh@119 -- # set +e 00:25:05.248 23:38:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:05.248 23:38:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:05.248 rmmod nvme_tcp 00:25:05.248 rmmod nvme_fabrics 00:25:05.248 rmmod nvme_keyring 00:25:05.248 23:38:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:05.248 23:38:26 -- nvmf/common.sh@123 -- # set -e 00:25:05.248 23:38:26 -- nvmf/common.sh@124 -- # return 0 00:25:05.248 23:38:26 -- nvmf/common.sh@477 -- # '[' -n 302903 ']' 00:25:05.248 23:38:26 -- nvmf/common.sh@478 -- # killprocess 302903 00:25:05.248 23:38:26 -- common/autotest_common.sh@926 -- # '[' -z 302903 ']' 00:25:05.248 23:38:26 -- common/autotest_common.sh@930 -- # kill -0 302903 00:25:05.248 23:38:26 -- common/autotest_common.sh@931 -- # uname 00:25:05.248 23:38:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:05.248 23:38:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 302903 00:25:05.248 23:38:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:05.248 23:38:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:05.248 23:38:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 302903' 00:25:05.248 killing process with pid 302903 00:25:05.248 23:38:26 -- 
common/autotest_common.sh@945 -- # kill 302903 00:25:05.248 23:38:26 -- common/autotest_common.sh@950 -- # wait 302903 00:25:05.814 23:38:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:05.814 23:38:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:05.814 23:38:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:05.814 23:38:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.814 23:38:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:05.814 23:38:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.814 23:38:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.814 23:38:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.350 23:38:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:08.350 00:25:08.350 real 1m2.393s 00:25:08.350 user 3m33.527s 00:25:08.350 sys 0m24.476s 00:25:08.350 23:38:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.350 23:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:08.350 ************************************ 00:25:08.350 END TEST nvmf_multiconnection 00:25:08.350 ************************************ 00:25:08.350 23:38:28 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:08.350 23:38:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:08.350 23:38:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.350 23:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:08.350 ************************************ 00:25:08.350 START TEST nvmf_initiator_timeout 00:25:08.350 ************************************ 00:25:08.350 23:38:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:08.350 * Looking for test storage... 
00:25:08.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.350 23:38:28 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.350 23:38:28 -- nvmf/common.sh@7 -- # uname -s 00:25:08.350 23:38:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.350 23:38:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.350 23:38:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.350 23:38:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.350 23:38:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.350 23:38:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.350 23:38:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.350 23:38:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.350 23:38:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.350 23:38:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.350 23:38:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:08.350 23:38:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:08.350 23:38:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.350 23:38:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.350 23:38:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.350 23:38:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.350 23:38:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.350 23:38:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.350 23:38:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.350 23:38:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.350 23:38:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.350 23:38:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.350 23:38:28 -- paths/export.sh@5 -- # export PATH 00:25:08.350 23:38:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.350 23:38:28 -- nvmf/common.sh@46 -- # : 0 00:25:08.350 23:38:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:08.350 23:38:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:08.350 23:38:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:08.350 23:38:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.350 23:38:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.350 23:38:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:08.350 23:38:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:08.350 23:38:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:08.350 23:38:28 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.350 23:38:28 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.350 23:38:28 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:08.350 23:38:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:08.350 23:38:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.350 23:38:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:08.350 23:38:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:08.350 23:38:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:08.350 23:38:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.350 23:38:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.350 23:38:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.350 23:38:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:08.350 23:38:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:08.350 23:38:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:08.350 23:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:10.888 23:38:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:10.888 23:38:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:10.888 23:38:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:10.888 23:38:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:10.888 23:38:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:10.888 23:38:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:10.888 23:38:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:10.888 23:38:31 -- nvmf/common.sh@294 -- # net_devs=() 00:25:10.888 23:38:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:10.888 
23:38:31 -- nvmf/common.sh@295 -- # e810=() 00:25:10.888 23:38:31 -- nvmf/common.sh@295 -- # local -ga e810 00:25:10.888 23:38:31 -- nvmf/common.sh@296 -- # x722=() 00:25:10.888 23:38:31 -- nvmf/common.sh@296 -- # local -ga x722 00:25:10.888 23:38:31 -- nvmf/common.sh@297 -- # mlx=() 00:25:10.888 23:38:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:10.888 23:38:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.888 23:38:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:10.888 23:38:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:10.888 23:38:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:10.888 23:38:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:10.888 23:38:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:10.888 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:10.888 23:38:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:10.888 23:38:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:10.888 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:10.888 23:38:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:10.888 23:38:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:10.888 23:38:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.888 23:38:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:10.888 23:38:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.888 23:38:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:84:00.0: cvl_0_0' 00:25:10.888 Found net devices under 0000:84:00.0: cvl_0_0 00:25:10.888 23:38:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.888 23:38:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:10.888 23:38:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.888 23:38:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:10.888 23:38:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.888 23:38:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:10.888 Found net devices under 0000:84:00.1: cvl_0_1 00:25:10.888 23:38:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.888 23:38:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:10.888 23:38:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:10.888 23:38:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:10.888 23:38:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:10.888 23:38:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.888 23:38:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.888 23:38:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.888 23:38:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:10.888 23:38:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.888 23:38:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.888 23:38:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:10.888 23:38:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.888 23:38:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.888 23:38:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:10.888 23:38:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:10.888 23:38:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.888 23:38:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.888 23:38:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.888 23:38:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.888 23:38:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:10.888 23:38:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.889 23:38:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.889 23:38:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.889 23:38:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:10.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:10.889 00:25:10.889 --- 10.0.0.2 ping statistics --- 00:25:10.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.889 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:10.889 23:38:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:10.889 00:25:10.889 --- 10.0.0.1 ping statistics --- 00:25:10.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.889 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:10.889 23:38:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.889 23:38:31 -- nvmf/common.sh@410 -- # return 0 00:25:10.889 23:38:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:10.889 23:38:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.889 23:38:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:10.889 23:38:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:10.889 23:38:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.889 23:38:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:10.889 23:38:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:10.889 23:38:31 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:10.889 23:38:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:10.889 23:38:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:10.889 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:25:10.889 23:38:31 -- nvmf/common.sh@469 -- # nvmfpid=312030 00:25:10.889 23:38:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:10.889 23:38:31 -- nvmf/common.sh@470 -- # waitforlisten 312030 00:25:10.889 23:38:31 -- common/autotest_common.sh@819 -- # '[' -z 312030 ']' 00:25:10.889 23:38:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.889 23:38:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:10.889 23:38:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.889 23:38:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:10.889 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:25:10.889 [2024-07-11 23:38:31.682637] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:10.889 [2024-07-11 23:38:31.682735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.889 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.889 [2024-07-11 23:38:31.757937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.147 [2024-07-11 23:38:31.858366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.147 [2024-07-11 23:38:31.858518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.147 [2024-07-11 23:38:31.858547] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.147 [2024-07-11 23:38:31.858563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
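[Editor's annotation] The nvmftestinit sequence above is what gives this single host a real two-endpoint NVMe/TCP topology: the two ports of the E810 NIC (cvl_0_0 and cvl_0_1) are put on the same 10.0.0.0/24 subnet, with the target port moved into a private network namespace so traffic between 10.0.0.1 and 10.0.0.2 traverses the NIC pair instead of the loopback device, and nvmf_tgt is then started inside that namespace. A condensed sketch of the commands visible in the trace (the helper is nvmf_tcp_init in test/nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                               # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond ping round trips above confirm the link is usable before the target application comes up.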
00:25:11.147 [2024-07-11 23:38:31.858618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.147 [2024-07-11 23:38:31.858675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.147 [2024-07-11 23:38:31.858725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.147 [2024-07-11 23:38:31.858728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.079 23:38:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:12.079 23:38:32 -- common/autotest_common.sh@852 -- # return 0 00:25:12.079 23:38:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:12.079 23:38:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:12.079 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 23:38:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:12.079 23:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.079 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 Malloc0 00:25:12.079 23:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:12.079 23:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.079 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 Delay0 00:25:12.079 23:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.079 23:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.079 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 [2024-07-11 23:38:32.800463] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.079 23:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:12.079 23:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.079 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 23:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:12.079 23:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.079 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 23:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.079 23:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.079 23:38:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 [2024-07-11 23:38:32.828781] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.079 23:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.079 23:38:32 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:12.643 23:38:33 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:12.643 23:38:33 -- common/autotest_common.sh@1177 -- # local i=0 00:25:12.643 23:38:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.643 23:38:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:12.643 23:38:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:14.538 23:38:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:14.538 23:38:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:14.538 23:38:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:14.538 23:38:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:14.538 23:38:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.538 23:38:35 -- common/autotest_common.sh@1187 -- # return 0 00:25:14.538 23:38:35 -- target/initiator_timeout.sh@35 -- # fio_pid=312476 00:25:14.538 23:38:35 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:14.538 23:38:35 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:14.538 [global] 00:25:14.538 thread=1 00:25:14.538 invalidate=1 00:25:14.538 rw=write 00:25:14.538 time_based=1 00:25:14.538 runtime=60 00:25:14.538 ioengine=libaio 00:25:14.538 direct=1 00:25:14.538 bs=4096 00:25:14.538 iodepth=1 00:25:14.538 norandommap=0 00:25:14.538 numjobs=1 00:25:14.538 00:25:14.538 verify_dump=1 00:25:14.538 verify_backlog=512 00:25:14.538 verify_state_save=0 00:25:14.538 do_verify=1 00:25:14.538 verify=crc32c-intel 00:25:14.538 [job0] 00:25:14.538 filename=/dev/nvme0n1 00:25:14.538 Could not set queue depth (nvme0n1) 00:25:14.795 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:14.795 fio-3.35 00:25:14.795 Starting 1 thread 00:25:18.072 23:38:38 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:18.072 23:38:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.072 23:38:38 -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 true 00:25:18.072 23:38:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.072 23:38:38 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:18.072 23:38:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.072 23:38:38 -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 true 00:25:18.072 23:38:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.072 23:38:38 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:18.072 23:38:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.072 23:38:38 -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 true 00:25:18.072 23:38:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.072 23:38:38 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:18.072 23:38:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.072 23:38:38 -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 true 00:25:18.072 23:38:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
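[Editor's annotation] Everything the initiator_timeout test needs is created over JSON-RPC against the freshly started target (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py). A 64 MiB malloc bdev is wrapped in a delay bdev with 30 us nominal latencies, exported as a namespace of cnode1 over TCP, and connected from the host with nvme-cli; fio-wrapper then drives a 60 s verified write job against /dev/nvme0n1. The four bdev_delay_update_latency calls that follow are the heart of the test: they push the delay latencies past the Linux initiator's default 30 s I/O timeout, so the in-flight I/O has to survive the timeout path (the latencies are dropped back to 30 us a few lines below). A minimal sketch of the same sequence, copied from the calls in this run:

    # target side, via scripts/rpc.py
    rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in us
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # with fio in flight, raise latencies above the 30 s initiator I/O timeout
    for lat in avg_read avg_write p99_read; do
        rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000   # 31 s, in microseconds
    done
    rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # as issued in this run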
00:25:18.072 23:38:38 -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:20.650 23:38:41 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:20.650 23:38:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.650 23:38:41 -- common/autotest_common.sh@10 -- # set +x 00:25:20.650 true 00:25:20.650 23:38:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.650 23:38:41 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:20.650 23:38:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.650 23:38:41 -- common/autotest_common.sh@10 -- # set +x 00:25:20.650 true 00:25:20.650 23:38:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.650 23:38:41 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:20.650 23:38:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.650 23:38:41 -- common/autotest_common.sh@10 -- # set +x 00:25:20.650 true 00:25:20.650 23:38:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.650 23:38:41 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:20.650 23:38:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.650 23:38:41 -- common/autotest_common.sh@10 -- # set +x 00:25:20.650 true 00:25:20.650 23:38:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.650 23:38:41 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:20.650 23:38:41 -- target/initiator_timeout.sh@54 -- # wait 312476 00:26:16.885 00:26:16.886 job0: (groupid=0, jobs=1): err= 0: pid=312667: Thu Jul 11 23:39:35 2024 00:26:16.886 read: IOPS=54, BW=216KiB/s (221kB/s)(12.7MiB/60001msec) 00:26:16.886 slat (usec): min=5, max=8770, avg=19.82, stdev=154.02 00:26:16.886 clat (usec): min=327, max=41033k, avg=18049.36, stdev=720681.35 00:26:16.886 lat (usec): min=341, max=41033k, avg=18069.18, stdev=720681.46 00:26:16.886 clat percentiles (usec): 00:26:16.886 | 1.00th=[ 383], 5.00th=[ 412], 10.00th=[ 424], 00:26:16.886 | 20.00th=[ 445], 30.00th=[ 461], 40.00th=[ 490], 00:26:16.886 | 50.00th=[ 529], 60.00th=[ 578], 70.00th=[ 676], 00:26:16.886 | 80.00th=[ 766], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:16.886 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 44303], 00:26:16.886 | 99.95th=[ 44827], 99.99th=[17112761] 00:26:16.886 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60001msec); 0 zone resets 00:26:16.886 slat (nsec): min=6265, max=76767, avg=19863.75, stdev=11073.52 00:26:16.886 clat (usec): min=213, max=1140, avg=366.51, stdev=109.50 00:26:16.886 lat (usec): min=221, max=1149, avg=386.37, stdev=118.20 00:26:16.886 clat percentiles (usec): 00:26:16.886 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 255], 20.00th=[ 285], 00:26:16.886 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 343], 00:26:16.886 | 70.00th=[ 396], 80.00th=[ 469], 90.00th=[ 562], 95.00th=[ 594], 00:26:16.886 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 676], 99.95th=[ 1090], 00:26:16.886 | 99.99th=[ 1139] 00:26:16.886 bw ( KiB/s): min= 2544, max= 5648, per=100.00%, avg=4096.00, stdev=981.57, samples=6 00:26:16.886 iops : min= 636, max= 1412, avg=1024.00, stdev=245.39, samples=6 00:26:16.886 lat (usec) : 250=4.37%, 500=59.64%, 750=24.08%, 1000=6.12% 00:26:16.886 lat (msec) : 2=0.09%, 10=0.01%, 50=5.67%, >=2000=0.01% 00:26:16.886 cpu : usr=0.14%, sys=0.27%, ctx=6827, majf=0, minf=2 00:26:16.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:26:16.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.886 issued rwts: total=3242,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:16.886 00:26:16.886 Run status group 0 (all jobs): 00:26:16.886 READ: bw=216KiB/s (221kB/s), 216KiB/s-216KiB/s (221kB/s-221kB/s), io=12.7MiB (13.3MB), run=60001-60001msec 00:26:16.886 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60001-60001msec 00:26:16.886 00:26:16.886 Disk stats (read/write): 00:26:16.886 nvme0n1: ios=3176/3584, merge=0/0, ticks=17464/1247, in_queue=18711, util=99.79% 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:16.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:16.886 23:39:35 -- common/autotest_common.sh@1198 -- # local i=0 00:26:16.886 23:39:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:26:16.886 23:39:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:16.886 23:39:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:16.886 23:39:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:16.886 23:39:35 -- common/autotest_common.sh@1210 -- # return 0 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:16.886 nvmf hotplug test: fio successful as expected 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:16.886 23:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.886 23:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:16.886 23:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:16.886 23:39:35 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:16.886 23:39:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:16.886 23:39:35 -- nvmf/common.sh@116 -- # sync 00:26:16.886 23:39:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:16.886 23:39:35 -- nvmf/common.sh@119 -- # set +e 00:26:16.886 23:39:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:16.886 23:39:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:16.886 rmmod nvme_tcp 00:26:16.886 rmmod nvme_fabrics 00:26:16.886 rmmod nvme_keyring 00:26:16.886 23:39:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:16.886 23:39:35 -- nvmf/common.sh@123 -- # set -e 00:26:16.886 23:39:35 -- nvmf/common.sh@124 -- # return 0 00:26:16.886 23:39:35 -- nvmf/common.sh@477 -- # '[' -n 312030 ']' 00:26:16.886 23:39:35 -- nvmf/common.sh@478 -- # killprocess 312030 00:26:16.886 23:39:35 -- common/autotest_common.sh@926 -- # '[' -z 312030 ']' 00:26:16.886 23:39:35 -- common/autotest_common.sh@930 -- # kill -0 312030 00:26:16.886 23:39:35 -- common/autotest_common.sh@931 -- # uname 00:26:16.886 23:39:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:16.886 23:39:35 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 312030 00:26:16.886 23:39:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:16.886 23:39:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:16.886 23:39:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 312030' 00:26:16.886 killing process with pid 312030 00:26:16.886 23:39:35 -- common/autotest_common.sh@945 -- # kill 312030 00:26:16.886 23:39:35 -- common/autotest_common.sh@950 -- # wait 312030 00:26:16.886 23:39:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:16.886 23:39:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:16.886 23:39:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:16.886 23:39:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.886 23:39:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:16.886 23:39:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.886 23:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.886 23:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.455 23:39:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:17.455 00:26:17.455 real 1m9.545s 00:26:17.455 user 4m14.156s 00:26:17.455 sys 0m7.217s 00:26:17.455 23:39:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.455 23:39:38 -- common/autotest_common.sh@10 -- # set +x 00:26:17.455 ************************************ 00:26:17.455 END TEST nvmf_initiator_timeout 00:26:17.455 ************************************ 00:26:17.455 23:39:38 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:26:17.455 23:39:38 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:26:17.455 23:39:38 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:26:17.455 23:39:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:17.455 23:39:38 -- common/autotest_common.sh@10 -- # set +x 00:26:19.989 23:39:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:19.989 23:39:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:19.989 23:39:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:19.989 23:39:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:19.989 23:39:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:19.989 23:39:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:19.989 23:39:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:19.989 23:39:40 -- nvmf/common.sh@294 -- # net_devs=() 00:26:19.989 23:39:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:19.989 23:39:40 -- nvmf/common.sh@295 -- # e810=() 00:26:19.989 23:39:40 -- nvmf/common.sh@295 -- # local -ga e810 00:26:19.989 23:39:40 -- nvmf/common.sh@296 -- # x722=() 00:26:19.989 23:39:40 -- nvmf/common.sh@296 -- # local -ga x722 00:26:19.989 23:39:40 -- nvmf/common.sh@297 -- # mlx=() 00:26:19.989 23:39:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:19.989 23:39:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.989 23:39:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:19.989 23:39:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:19.989 23:39:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:19.989 23:39:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:19.989 23:39:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:19.989 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:19.989 23:39:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:19.989 23:39:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:19.989 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:19.989 23:39:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:19.989 23:39:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:19.989 23:39:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.989 23:39:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:19.989 23:39:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.989 23:39:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:19.989 Found net devices under 0000:84:00.0: cvl_0_0 00:26:19.989 23:39:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.989 23:39:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:19.989 23:39:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.989 23:39:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:19.989 23:39:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.989 23:39:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:19.989 Found net devices under 0000:84:00.1: cvl_0_1 00:26:19.989 23:39:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.989 23:39:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:19.989 23:39:40 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.989 23:39:40 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:26:19.989 23:39:40 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:19.989 23:39:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:19.989 23:39:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.989 23:39:40 -- common/autotest_common.sh@10 -- # set +x 00:26:19.989 ************************************ 00:26:19.989 START TEST nvmf_perf_adq 00:26:19.989 ************************************ 00:26:19.989 23:39:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:19.989 * Looking for test storage... 00:26:19.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:19.989 23:39:40 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.989 23:39:40 -- nvmf/common.sh@7 -- # uname -s 00:26:19.989 23:39:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.989 23:39:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.989 23:39:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.989 23:39:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.989 23:39:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.989 23:39:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.989 23:39:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.989 23:39:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.989 23:39:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.989 23:39:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.989 23:39:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:19.989 23:39:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:19.989 23:39:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.989 23:39:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.989 23:39:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.989 23:39:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.989 23:39:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.989 23:39:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.989 23:39:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.989 23:39:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.990 23:39:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.990 23:39:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.990 23:39:40 -- paths/export.sh@5 -- # export PATH 00:26:19.990 23:39:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.990 23:39:40 -- nvmf/common.sh@46 -- # : 0 00:26:19.990 23:39:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:19.990 23:39:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:19.990 23:39:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:19.990 23:39:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.990 23:39:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.990 23:39:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:19.990 23:39:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:19.990 23:39:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:19.990 23:39:40 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:19.990 23:39:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:19.990 23:39:40 -- common/autotest_common.sh@10 -- # set +x 00:26:22.524 23:39:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:22.524 23:39:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:22.524 23:39:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:22.524 23:39:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:22.524 23:39:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:22.524 23:39:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:22.524 23:39:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:22.524 23:39:43 -- nvmf/common.sh@294 -- # net_devs=() 00:26:22.524 23:39:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:22.524 23:39:43 -- nvmf/common.sh@295 -- # e810=() 00:26:22.524 23:39:43 -- nvmf/common.sh@295 -- # local -ga e810 00:26:22.524 23:39:43 -- nvmf/common.sh@296 -- # x722=() 00:26:22.524 23:39:43 -- nvmf/common.sh@296 -- # local -ga x722 00:26:22.524 23:39:43 -- nvmf/common.sh@297 -- # mlx=() 00:26:22.524 23:39:43 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:26:22.524 23:39:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.524 23:39:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:22.524 23:39:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:22.524 23:39:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:22.524 23:39:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:22.524 23:39:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:22.524 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:22.524 23:39:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:22.524 23:39:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:22.524 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:22.524 23:39:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:22.524 23:39:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:22.524 23:39:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:22.524 23:39:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.524 23:39:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:22.524 23:39:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.524 23:39:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:22.524 Found net devices under 0000:84:00.0: cvl_0_0 00:26:22.524 23:39:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.524 23:39:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:22.524 23:39:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
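[Editor's annotation] This is the same gather_supported_nvmf_pci_devs pass seen at the top of the section, re-run because perf_adq.sh performs its own nvmftestinit. The helper buckets NICs by PCI vendor:device ID (Intel E810 as 0x1592/0x159b, X722 as 0x37d2, plus several Mellanox IDs), keeps the E810 pair present on this rig, and resolves each PCI function to its kernel net device through sysfs. Roughly, and assuming the 0000:84:00.x addresses from this run, the sysfs step amounts to:

    # resolve PCI functions to net device names, as nvmf/common.sh does with
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for pci in 0000:84:00.0 0000:84:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue          # no netdev bound to this function
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

which produces exactly the 'Found net devices under 0000:84:00.x: cvl_0_x' lines repeated through this log.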
00:26:22.524 23:39:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:22.524 23:39:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.524 23:39:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:22.524 Found net devices under 0000:84:00.1: cvl_0_1 00:26:22.524 23:39:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.524 23:39:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:22.524 23:39:43 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.524 23:39:43 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:22.524 23:39:43 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:22.524 23:39:43 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:26:22.524 23:39:43 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:23.090 23:39:44 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:25.625 23:39:46 -- target/perf_adq.sh@54 -- # sleep 5 00:26:30.896 23:39:51 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:30.896 23:39:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:30.896 23:39:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.896 23:39:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:30.896 23:39:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:30.896 23:39:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:30.896 23:39:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.896 23:39:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.896 23:39:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.896 23:39:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:30.896 23:39:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:30.896 23:39:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:30.896 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.896 23:39:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:30.896 23:39:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:30.896 23:39:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:30.896 23:39:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:30.897 23:39:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:30.897 23:39:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:30.897 23:39:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:30.897 23:39:51 -- nvmf/common.sh@294 -- # net_devs=() 00:26:30.897 23:39:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:30.897 23:39:51 -- nvmf/common.sh@295 -- # e810=() 00:26:30.897 23:39:51 -- nvmf/common.sh@295 -- # local -ga e810 00:26:30.897 23:39:51 -- nvmf/common.sh@296 -- # x722=() 00:26:30.897 23:39:51 -- nvmf/common.sh@296 -- # local -ga x722 00:26:30.897 23:39:51 -- nvmf/common.sh@297 -- # mlx=() 00:26:30.897 23:39:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:30.897 23:39:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.897 23:39:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:30.897 23:39:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:30.897 23:39:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:30.897 23:39:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:30.897 23:39:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:30.897 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:30.897 23:39:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:30.897 23:39:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:30.897 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:30.897 23:39:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:30.897 23:39:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:30.897 23:39:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.897 23:39:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:30.897 23:39:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.897 23:39:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:30.897 Found net devices under 0000:84:00.0: cvl_0_0 00:26:30.897 23:39:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.897 23:39:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:30.897 23:39:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.897 23:39:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:30.897 23:39:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.897 23:39:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:30.897 Found net devices under 0000:84:00.1: cvl_0_1 00:26:30.897 23:39:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.897 23:39:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:30.897 23:39:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:30.897 23:39:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:30.897 23:39:51 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:30.897 23:39:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.897 23:39:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.897 23:39:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.897 23:39:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:30.897 23:39:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.897 23:39:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.897 23:39:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:30.897 23:39:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.897 23:39:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.897 23:39:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:30.897 23:39:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:30.897 23:39:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.897 23:39:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.897 23:39:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.897 23:39:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.897 23:39:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:30.897 23:39:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.897 23:39:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.897 23:39:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.897 23:39:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:30.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:26:30.897 00:26:30.897 --- 10.0.0.2 ping statistics --- 00:26:30.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.897 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:30.897 23:39:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:26:30.897 00:26:30.897 --- 10.0.0.1 ping statistics --- 00:26:30.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.897 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:26:30.897 23:39:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.897 23:39:51 -- nvmf/common.sh@410 -- # return 0 00:26:30.897 23:39:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:30.897 23:39:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.897 23:39:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:30.897 23:39:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.897 23:39:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:30.897 23:39:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:30.897 23:39:51 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:30.897 23:39:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:30.897 23:39:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:30.897 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.897 23:39:51 -- nvmf/common.sh@469 -- # nvmfpid=325090 00:26:30.897 23:39:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:30.897 23:39:51 -- nvmf/common.sh@470 -- # waitforlisten 325090 00:26:30.897 23:39:51 -- common/autotest_common.sh@819 -- # '[' -z 325090 ']' 00:26:30.897 23:39:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.897 23:39:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:30.897 23:39:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.897 23:39:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:30.897 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.897 [2024-07-11 23:39:51.286743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:30.897 [2024-07-11 23:39:51.286839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.897 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.897 [2024-07-11 23:39:51.367457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:30.897 [2024-07-11 23:39:51.461744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:30.897 [2024-07-11 23:39:51.461903] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.897 [2024-07-11 23:39:51.461925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.897 [2024-07-11 23:39:51.461940] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
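[Editor's annotation] The ADQ-specific part of the bring-up follows. nvmf_tgt is launched with --wait-for-rpc so that socket-implementation options (placement IDs, zero-copy send) can be applied before the framework initializes, and the TCP transport is created with --sock-priority 0. spdk_nvme_perf then connects from four cores (-c 0xF0), and nvmf_get_stats is queried to confirm that each of the target's four poll groups is serving exactly one I/O qpair, i.e. that connections were spread one per core. A condensed sketch of both steps, using the calls visible in this run:

    # defer framework init until socket options are applied (waitforlisten elided)
    build/bin/nvmf_tgt -m 0xF --wait-for-rpc &
    rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0

    # after spdk_nvme_perf attaches, count poll groups owning exactly one I/O qpair
    rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l        # the test expects 4

Between passes the suite also reloads the NIC driver (adq_reload_driver: rmmod ice; modprobe ice; sleep 5) to reset ADQ state, which is why the device-discovery and nvmftestinit blocks keep repeating through the rest of the log.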
00:26:30.897 [2024-07-11 23:39:51.461999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.897 [2024-07-11 23:39:51.462030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.897 [2024-07-11 23:39:51.462083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.897 [2024-07-11 23:39:51.462086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.897 23:39:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:30.897 23:39:51 -- common/autotest_common.sh@852 -- # return 0 00:26:30.897 23:39:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:30.897 23:39:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:30.897 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.897 23:39:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.897 23:39:51 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:26:30.897 23:39:51 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:30.897 23:39:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.897 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.897 23:39:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.897 23:39:51 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:30.897 23:39:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.897 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.897 23:39:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.897 23:39:51 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:30.897 23:39:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.897 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.897 [2024-07-11 23:39:51.724348] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.897 23:39:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.897 23:39:51 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:30.897 23:39:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.897 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.898 Malloc1 00:26:30.898 23:39:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.898 23:39:51 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.898 23:39:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.898 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.898 23:39:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.898 23:39:51 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:30.898 23:39:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.898 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.898 23:39:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.898 23:39:51 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.898 23:39:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.898 23:39:51 -- common/autotest_common.sh@10 -- # set +x 00:26:30.898 [2024-07-11 23:39:51.778907] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.898 23:39:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.898 23:39:51 -- target/perf_adq.sh@73 -- # perfpid=325132 00:26:30.898 23:39:51 -- target/perf_adq.sh@74 -- # sleep 2 00:26:30.898 23:39:51 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:30.898 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.435 23:39:53 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:26:33.435 23:39:53 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:33.435 23:39:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.435 23:39:53 -- target/perf_adq.sh@76 -- # wc -l 00:26:33.435 23:39:53 -- common/autotest_common.sh@10 -- # set +x 00:26:33.435 23:39:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.435 23:39:53 -- target/perf_adq.sh@76 -- # count=4 00:26:33.435 23:39:53 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:26:33.435 23:39:53 -- target/perf_adq.sh@81 -- # wait 325132 00:26:41.542 Initializing NVMe Controllers 00:26:41.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:41.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:41.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:41.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:41.542 Initialization complete. Launching workers. 00:26:41.542 ======================================================== 00:26:41.542 Latency(us) 00:26:41.542 Device Information : IOPS MiB/s Average min max 00:26:41.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10862.77 42.43 5910.43 1087.34 46268.55 00:26:41.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10931.36 42.70 5856.58 1061.59 9605.89 00:26:41.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11066.86 43.23 5785.22 1029.32 9406.62 00:26:41.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10939.16 42.73 5850.79 939.93 9822.96 00:26:41.542 ======================================================== 00:26:41.542 Total : 43800.16 171.09 5850.46 939.93 46268.55 00:26:41.542 00:26:41.542 23:40:01 -- target/perf_adq.sh@82 -- # nvmftestfini 00:26:41.542 23:40:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:41.542 23:40:01 -- nvmf/common.sh@116 -- # sync 00:26:41.542 23:40:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:41.542 23:40:01 -- nvmf/common.sh@119 -- # set +e 00:26:41.542 23:40:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:41.542 23:40:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:41.542 rmmod nvme_tcp 00:26:41.542 rmmod nvme_fabrics 00:26:41.542 rmmod nvme_keyring 00:26:41.542 23:40:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:41.542 23:40:02 -- nvmf/common.sh@123 -- # set -e 00:26:41.542 23:40:02 -- nvmf/common.sh@124 -- # return 0 00:26:41.542 23:40:02 -- nvmf/common.sh@477 -- # '[' -n 325090 ']' 00:26:41.542 23:40:02 -- nvmf/common.sh@478 -- # killprocess 325090 00:26:41.542 23:40:02 -- common/autotest_common.sh@926 -- # '[' -z 325090 ']' 00:26:41.542 23:40:02 -- common/autotest_common.sh@930 -- # kill 
-0 325090 00:26:41.542 23:40:02 -- common/autotest_common.sh@931 -- # uname 00:26:41.542 23:40:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:41.542 23:40:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 325090 00:26:41.542 23:40:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:41.542 23:40:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:41.542 23:40:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 325090' 00:26:41.542 killing process with pid 325090 00:26:41.542 23:40:02 -- common/autotest_common.sh@945 -- # kill 325090 00:26:41.542 23:40:02 -- common/autotest_common.sh@950 -- # wait 325090 00:26:41.542 23:40:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:41.542 23:40:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:41.542 23:40:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:41.542 23:40:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.542 23:40:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:41.542 23:40:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.542 23:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.542 23:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.075 23:40:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:44.075 23:40:04 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:26:44.075 23:40:04 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:44.332 23:40:05 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:46.920 23:40:07 -- target/perf_adq.sh@54 -- # sleep 5 00:26:52.224 23:40:12 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:52.224 23:40:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:52.224 23:40:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.224 23:40:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:52.224 23:40:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:52.224 23:40:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:52.224 23:40:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.224 23:40:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.224 23:40:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.224 23:40:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:52.224 23:40:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:52.224 23:40:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:52.224 23:40:12 -- common/autotest_common.sh@10 -- # set +x 00:26:52.224 23:40:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:52.224 23:40:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:52.224 23:40:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:52.224 23:40:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:52.224 23:40:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:52.224 23:40:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:52.224 23:40:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:52.224 23:40:12 -- nvmf/common.sh@294 -- # net_devs=() 00:26:52.224 23:40:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:52.224 23:40:12 -- nvmf/common.sh@295 -- # e810=() 00:26:52.224 23:40:12 -- nvmf/common.sh@295 -- # local -ga e810 00:26:52.224 23:40:12 -- nvmf/common.sh@296 -- # x722=() 00:26:52.224 23:40:12 -- nvmf/common.sh@296 -- # local -ga x722 00:26:52.224 23:40:12 -- nvmf/common.sh@297 -- # mlx=() 00:26:52.224 23:40:12 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:26:52.224 23:40:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.225 23:40:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:52.225 23:40:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:52.225 23:40:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:52.225 23:40:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:52.225 23:40:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:52.225 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:52.225 23:40:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:52.225 23:40:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:52.225 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:52.225 23:40:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:52.225 23:40:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:52.225 23:40:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.225 23:40:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:52.225 23:40:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.225 23:40:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:52.225 Found net devices under 0000:84:00.0: cvl_0_0 00:26:52.225 23:40:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.225 23:40:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:52.225 23:40:12 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.225 23:40:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:52.225 23:40:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.225 23:40:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:52.225 Found net devices under 0000:84:00.1: cvl_0_1 00:26:52.225 23:40:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.225 23:40:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:52.225 23:40:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:52.225 23:40:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:52.225 23:40:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.225 23:40:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.225 23:40:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.225 23:40:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:52.225 23:40:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.225 23:40:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.225 23:40:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:52.225 23:40:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.225 23:40:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.225 23:40:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:52.225 23:40:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:52.225 23:40:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.225 23:40:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.225 23:40:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.225 23:40:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.225 23:40:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:52.225 23:40:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.225 23:40:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.225 23:40:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.225 23:40:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:52.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:26:52.225 00:26:52.225 --- 10.0.0.2 ping statistics --- 00:26:52.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.225 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:52.225 23:40:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:52.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:26:52.225 00:26:52.225 --- 10.0.0.1 ping statistics --- 00:26:52.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.225 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:26:52.225 23:40:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.225 23:40:12 -- nvmf/common.sh@410 -- # return 0 00:26:52.225 23:40:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:52.225 23:40:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.225 23:40:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:52.225 23:40:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.225 23:40:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:52.225 23:40:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:52.225 23:40:12 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:52.225 23:40:12 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:52.225 23:40:12 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:52.225 23:40:12 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:52.225 net.core.busy_poll = 1 00:26:52.225 23:40:12 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:52.225 net.core.busy_read = 1 00:26:52.225 23:40:12 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:52.225 23:40:12 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:52.225 23:40:12 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:52.225 23:40:12 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:52.225 23:40:12 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:52.225 23:40:12 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:52.225 23:40:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:52.225 23:40:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:52.225 23:40:12 -- common/autotest_common.sh@10 -- # set +x 00:26:52.225 23:40:12 -- nvmf/common.sh@469 -- # nvmfpid=327804 00:26:52.225 23:40:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:52.225 23:40:12 -- nvmf/common.sh@470 -- # waitforlisten 327804 00:26:52.225 23:40:12 -- common/autotest_common.sh@819 -- # '[' -z 327804 ']' 00:26:52.225 23:40:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.225 23:40:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:52.225 23:40:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:52.225 23:40:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:52.225 23:40:12 -- common/autotest_common.sh@10 -- # set +x 00:26:52.225 [2024-07-11 23:40:12.746977] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:52.225 [2024-07-11 23:40:12.747136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.225 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.225 [2024-07-11 23:40:12.854592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.225 [2024-07-11 23:40:12.947704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:52.225 [2024-07-11 23:40:12.947870] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.225 [2024-07-11 23:40:12.947891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.225 [2024-07-11 23:40:12.947906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.225 [2024-07-11 23:40:12.947961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.225 [2024-07-11 23:40:12.947996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.225 [2024-07-11 23:40:12.948059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.225 [2024-07-11 23:40:12.948061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.157 23:40:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:53.157 23:40:14 -- common/autotest_common.sh@852 -- # return 0 00:26:53.157 23:40:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:53.157 23:40:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:53.157 23:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.157 23:40:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.157 23:40:14 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:53.157 23:40:14 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:53.157 23:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.157 23:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.157 23:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.157 23:40:14 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:53.157 23:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.157 23:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.415 23:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.415 23:40:14 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:53.415 23:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.415 23:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.415 [2024-07-11 23:40:14.182088] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.415 23:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.415 23:40:14 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:53.415 23:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.415 23:40:14 -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.415 Malloc1 00:26:53.415 23:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.415 23:40:14 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:53.415 23:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.415 23:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.415 23:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.415 23:40:14 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:53.415 23:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.415 23:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.415 23:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.415 23:40:14 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.415 23:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.415 23:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.415 [2024-07-11 23:40:14.235788] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.415 23:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.415 23:40:14 -- target/perf_adq.sh@94 -- # perfpid=327972 00:26:53.415 23:40:14 -- target/perf_adq.sh@95 -- # sleep 2 00:26:53.415 23:40:14 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:53.415 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.314 23:40:16 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:26:55.314 23:40:16 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:55.314 23:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.314 23:40:16 -- target/perf_adq.sh@97 -- # wc -l 00:26:55.314 23:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:55.314 23:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.571 23:40:16 -- target/perf_adq.sh@97 -- # count=2 00:26:55.571 23:40:16 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:26:55.571 23:40:16 -- target/perf_adq.sh@103 -- # wait 327972 00:27:03.674 Initializing NVMe Controllers 00:27:03.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:03.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:03.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:03.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:03.674 Initialization complete. Launching workers. 
00:27:03.674 ======================================================== 00:27:03.674 Latency(us) 00:27:03.674 Device Information : IOPS MiB/s Average min max 00:27:03.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6854.40 26.77 9337.73 1694.72 52793.98 00:27:03.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7550.70 29.49 8476.94 1487.83 53072.52 00:27:03.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8547.60 33.39 7487.46 1652.56 53184.73 00:27:03.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5998.40 23.43 10702.86 1721.80 53361.52 00:27:03.674 ======================================================== 00:27:03.674 Total : 28951.10 113.09 8849.79 1487.83 53361.52 00:27:03.674 00:27:03.674 23:40:24 -- target/perf_adq.sh@104 -- # nvmftestfini 00:27:03.674 23:40:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.674 23:40:24 -- nvmf/common.sh@116 -- # sync 00:27:03.674 23:40:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:03.674 23:40:24 -- nvmf/common.sh@119 -- # set +e 00:27:03.675 23:40:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.675 23:40:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:03.675 rmmod nvme_tcp 00:27:03.675 rmmod nvme_fabrics 00:27:03.675 rmmod nvme_keyring 00:27:03.675 23:40:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:03.675 23:40:24 -- nvmf/common.sh@123 -- # set -e 00:27:03.675 23:40:24 -- nvmf/common.sh@124 -- # return 0 00:27:03.675 23:40:24 -- nvmf/common.sh@477 -- # '[' -n 327804 ']' 00:27:03.675 23:40:24 -- nvmf/common.sh@478 -- # killprocess 327804 00:27:03.675 23:40:24 -- common/autotest_common.sh@926 -- # '[' -z 327804 ']' 00:27:03.675 23:40:24 -- common/autotest_common.sh@930 -- # kill -0 327804 00:27:03.675 23:40:24 -- common/autotest_common.sh@931 -- # uname 00:27:03.675 23:40:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:03.675 23:40:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 327804 00:27:03.675 23:40:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:03.675 23:40:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:03.675 23:40:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 327804' 00:27:03.675 killing process with pid 327804 00:27:03.675 23:40:24 -- common/autotest_common.sh@945 -- # kill 327804 00:27:03.675 23:40:24 -- common/autotest_common.sh@950 -- # wait 327804 00:27:03.934 23:40:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:03.934 23:40:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:03.934 23:40:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:03.934 23:40:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.934 23:40:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:03.934 23:40:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.934 23:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.934 23:40:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.222 23:40:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:07.222 23:40:27 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:27:07.222 00:27:07.222 real 0m46.989s 00:27:07.222 user 2m44.751s 00:27:07.222 sys 0m10.906s 00:27:07.222 23:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.222 23:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:07.222 
************************************ 00:27:07.222 END TEST nvmf_perf_adq 00:27:07.222 ************************************ 00:27:07.222 23:40:27 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:07.222 23:40:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:07.222 23:40:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:07.222 23:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:07.222 ************************************ 00:27:07.222 START TEST nvmf_shutdown 00:27:07.222 ************************************ 00:27:07.222 23:40:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:07.222 * Looking for test storage... 00:27:07.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.222 23:40:27 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.222 23:40:27 -- nvmf/common.sh@7 -- # uname -s 00:27:07.222 23:40:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.222 23:40:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.222 23:40:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.222 23:40:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.222 23:40:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.222 23:40:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.222 23:40:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.222 23:40:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.222 23:40:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.222 23:40:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.222 23:40:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:07.222 23:40:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:07.222 23:40:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.222 23:40:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.222 23:40:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.222 23:40:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.222 23:40:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.222 23:40:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.222 23:40:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.222 23:40:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.222 23:40:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.222 23:40:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.222 23:40:27 -- paths/export.sh@5 -- # export PATH 00:27:07.222 23:40:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.222 23:40:27 -- nvmf/common.sh@46 -- # : 0 00:27:07.222 23:40:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:07.222 23:40:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:07.222 23:40:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:07.222 23:40:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.222 23:40:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.222 23:40:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:07.222 23:40:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:07.222 23:40:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:07.222 23:40:27 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:07.222 23:40:27 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:07.222 23:40:27 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:07.222 23:40:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:07.222 23:40:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:07.222 23:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:07.222 ************************************ 00:27:07.222 START TEST nvmf_shutdown_tc1 00:27:07.222 ************************************ 00:27:07.222 23:40:27 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:27:07.222 23:40:27 -- target/shutdown.sh@74 -- # starttarget 00:27:07.222 23:40:27 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:07.222 23:40:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:07.222 23:40:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.222 23:40:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:07.222 23:40:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:07.222 23:40:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:07.222 
23:40:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.222 23:40:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.222 23:40:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.222 23:40:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:07.222 23:40:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:07.222 23:40:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:07.222 23:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:09.754 23:40:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:09.754 23:40:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:09.754 23:40:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:09.754 23:40:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:09.754 23:40:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:09.754 23:40:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:09.754 23:40:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:09.754 23:40:30 -- nvmf/common.sh@294 -- # net_devs=() 00:27:09.754 23:40:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:09.754 23:40:30 -- nvmf/common.sh@295 -- # e810=() 00:27:09.754 23:40:30 -- nvmf/common.sh@295 -- # local -ga e810 00:27:09.754 23:40:30 -- nvmf/common.sh@296 -- # x722=() 00:27:09.754 23:40:30 -- nvmf/common.sh@296 -- # local -ga x722 00:27:09.754 23:40:30 -- nvmf/common.sh@297 -- # mlx=() 00:27:09.754 23:40:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:09.754 23:40:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.754 23:40:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:09.754 23:40:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:09.754 23:40:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:09.754 23:40:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:09.754 23:40:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:09.754 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:09.754 23:40:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:09.754 23:40:30 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:27:09.755 23:40:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:09.755 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:09.755 23:40:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:09.755 23:40:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:09.755 23:40:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.755 23:40:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:09.755 23:40:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.755 23:40:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:09.755 Found net devices under 0000:84:00.0: cvl_0_0 00:27:09.755 23:40:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.755 23:40:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:09.755 23:40:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.755 23:40:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:09.755 23:40:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.755 23:40:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:09.755 Found net devices under 0000:84:00.1: cvl_0_1 00:27:09.755 23:40:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.755 23:40:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:09.755 23:40:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:09.755 23:40:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:09.755 23:40:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.755 23:40:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.755 23:40:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.755 23:40:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:09.755 23:40:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.755 23:40:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.755 23:40:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:09.755 23:40:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.755 23:40:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.755 23:40:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:09.755 23:40:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:09.755 23:40:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.755 23:40:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.755 23:40:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.755 23:40:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.755 23:40:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:09.755 23:40:30 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.755 23:40:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.755 23:40:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.755 23:40:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:09.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:27:09.755 00:27:09.755 --- 10.0.0.2 ping statistics --- 00:27:09.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.755 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:09.755 23:40:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:27:09.755 00:27:09.755 --- 10.0.0.1 ping statistics --- 00:27:09.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.755 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:09.755 23:40:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.755 23:40:30 -- nvmf/common.sh@410 -- # return 0 00:27:09.755 23:40:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:09.755 23:40:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.755 23:40:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:09.755 23:40:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.755 23:40:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:09.755 23:40:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:10.013 23:40:30 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:10.013 23:40:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:10.013 23:40:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:10.013 23:40:30 -- common/autotest_common.sh@10 -- # set +x 00:27:10.013 23:40:30 -- nvmf/common.sh@469 -- # nvmfpid=331319 00:27:10.013 23:40:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:10.013 23:40:30 -- nvmf/common.sh@470 -- # waitforlisten 331319 00:27:10.013 23:40:30 -- common/autotest_common.sh@819 -- # '[' -z 331319 ']' 00:27:10.013 23:40:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.013 23:40:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:10.013 23:40:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.013 23:40:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:10.013 23:40:30 -- common/autotest_common.sh@10 -- # set +x 00:27:10.013 [2024-07-11 23:40:30.772644] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:10.013 [2024-07-11 23:40:30.772733] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.013 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.013 [2024-07-11 23:40:30.858232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.270 [2024-07-11 23:40:30.966290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:10.270 [2024-07-11 23:40:30.966453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.270 [2024-07-11 23:40:30.966474] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.270 [2024-07-11 23:40:30.966488] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.270 [2024-07-11 23:40:30.966544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.270 [2024-07-11 23:40:30.966590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.270 [2024-07-11 23:40:30.966641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.270 [2024-07-11 23:40:30.966643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.202 23:40:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:11.202 23:40:31 -- common/autotest_common.sh@852 -- # return 0 00:27:11.202 23:40:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:11.202 23:40:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:11.202 23:40:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.202 23:40:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.202 23:40:31 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.202 23:40:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.202 23:40:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.202 [2024-07-11 23:40:31.903174] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.202 23:40:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.202 23:40:31 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:11.202 23:40:31 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:11.202 23:40:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:11.202 23:40:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.202 23:40:31 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- 
target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.202 23:40:31 -- target/shutdown.sh@28 -- # cat 00:27:11.202 23:40:31 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:11.202 23:40:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:11.202 23:40:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.202 Malloc1 00:27:11.202 [2024-07-11 23:40:31.992183] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.202 Malloc2 00:27:11.202 Malloc3 00:27:11.202 Malloc4 00:27:11.460 Malloc5 00:27:11.460 Malloc6 00:27:11.460 Malloc7 00:27:11.460 Malloc8 00:27:11.460 Malloc9 00:27:11.718 Malloc10 00:27:11.718 23:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:11.718 23:40:32 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:11.718 23:40:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:11.719 23:40:32 -- common/autotest_common.sh@10 -- # set +x 00:27:11.719 23:40:32 -- target/shutdown.sh@78 -- # perfpid=331639 00:27:11.719 23:40:32 -- target/shutdown.sh@79 -- # waitforlisten 331639 /var/tmp/bdevperf.sock 00:27:11.719 23:40:32 -- common/autotest_common.sh@819 -- # '[' -z 331639 ']' 00:27:11.719 23:40:32 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:11.719 23:40:32 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:11.719 23:40:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:11.719 23:40:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:11.719 23:40:32 -- nvmf/common.sh@520 -- # config=() 00:27:11.719 23:40:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:11.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:11.719 23:40:32 -- nvmf/common.sh@520 -- # local subsystem config 00:27:11.719 23:40:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- common/autotest_common.sh@10 -- # set +x 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:11.719 { 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme$subsystem", 00:27:11.719 "trtype": "$TEST_TRANSPORT", 00:27:11.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "$NVMF_PORT", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.719 "hdgst": ${hdgst:-false}, 00:27:11.719 "ddgst": ${ddgst:-false} 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 } 00:27:11.719 EOF 00:27:11.719 )") 00:27:11.719 23:40:32 -- nvmf/common.sh@542 -- # cat 00:27:11.719 23:40:32 -- nvmf/common.sh@544 -- # jq . 00:27:11.719 23:40:32 -- nvmf/common.sh@545 -- # IFS=, 00:27:11.719 23:40:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme1", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme2", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme3", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme4", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme5", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme6", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme7", 00:27:11.719 "trtype": 
"tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme8", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme9", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 },{ 00:27:11.719 "params": { 00:27:11.719 "name": "Nvme10", 00:27:11.719 "trtype": "tcp", 00:27:11.719 "traddr": "10.0.0.2", 00:27:11.719 "adrfam": "ipv4", 00:27:11.719 "trsvcid": "4420", 00:27:11.719 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:11.719 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:11.719 "hdgst": false, 00:27:11.719 "ddgst": false 00:27:11.719 }, 00:27:11.719 "method": "bdev_nvme_attach_controller" 00:27:11.719 }' 00:27:11.719 [2024-07-11 23:40:32.516519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:11.719 [2024-07-11 23:40:32.516605] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:11.719 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.719 [2024-07-11 23:40:32.586397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.976 [2024-07-11 23:40:32.671688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.889 23:40:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:13.889 23:40:34 -- common/autotest_common.sh@852 -- # return 0 00:27:13.889 23:40:34 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:13.889 23:40:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:13.889 23:40:34 -- common/autotest_common.sh@10 -- # set +x 00:27:13.889 23:40:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:13.889 23:40:34 -- target/shutdown.sh@83 -- # kill -9 331639 00:27:13.889 23:40:34 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:13.889 23:40:34 -- target/shutdown.sh@87 -- # sleep 1 00:27:14.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 331639 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:14.822 23:40:35 -- target/shutdown.sh@88 -- # kill -0 331319 00:27:14.822 23:40:35 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:14.822 23:40:35 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 
2 3 4 5 6 7 8 9 10 00:27:14.822 23:40:35 -- nvmf/common.sh@520 -- # config=() 00:27:14.822 23:40:35 -- nvmf/common.sh@520 -- # local subsystem config 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.822 { 00:27:14.822 "params": { 00:27:14.822 "name": "Nvme$subsystem", 00:27:14.822 "trtype": "$TEST_TRANSPORT", 00:27:14.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.822 "adrfam": "ipv4", 00:27:14.822 "trsvcid": "$NVMF_PORT", 00:27:14.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.822 "hdgst": ${hdgst:-false}, 00:27:14.822 "ddgst": ${ddgst:-false} 00:27:14.822 }, 00:27:14.822 "method": "bdev_nvme_attach_controller" 00:27:14.822 } 00:27:14.822 EOF 00:27:14.822 )") 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.822 { 00:27:14.822 "params": { 00:27:14.822 "name": "Nvme$subsystem", 00:27:14.822 "trtype": "$TEST_TRANSPORT", 00:27:14.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.822 "adrfam": "ipv4", 00:27:14.822 "trsvcid": "$NVMF_PORT", 00:27:14.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.822 "hdgst": ${hdgst:-false}, 00:27:14.822 "ddgst": ${ddgst:-false} 00:27:14.822 }, 00:27:14.822 "method": "bdev_nvme_attach_controller" 00:27:14.822 } 00:27:14.822 EOF 00:27:14.822 )") 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.822 { 00:27:14.822 "params": { 00:27:14.822 "name": "Nvme$subsystem", 00:27:14.822 "trtype": "$TEST_TRANSPORT", 00:27:14.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.822 "adrfam": "ipv4", 00:27:14.822 "trsvcid": "$NVMF_PORT", 00:27:14.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.822 "hdgst": ${hdgst:-false}, 00:27:14.822 "ddgst": ${ddgst:-false} 00:27:14.822 }, 00:27:14.822 "method": "bdev_nvme_attach_controller" 00:27:14.822 } 00:27:14.822 EOF 00:27:14.822 )") 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.822 { 00:27:14.822 "params": { 00:27:14.822 "name": "Nvme$subsystem", 00:27:14.822 "trtype": "$TEST_TRANSPORT", 00:27:14.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.822 "adrfam": "ipv4", 00:27:14.822 "trsvcid": "$NVMF_PORT", 00:27:14.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.822 "hdgst": ${hdgst:-false}, 00:27:14.822 "ddgst": ${ddgst:-false} 00:27:14.822 }, 00:27:14.822 "method": "bdev_nvme_attach_controller" 00:27:14.822 } 00:27:14.822 EOF 00:27:14.822 )") 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.822 { 00:27:14.822 "params": { 00:27:14.822 "name": "Nvme$subsystem", 00:27:14.822 "trtype": "$TEST_TRANSPORT", 00:27:14.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.822 "adrfam": "ipv4", 00:27:14.822 "trsvcid": 
"$NVMF_PORT", 00:27:14.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.822 "hdgst": ${hdgst:-false}, 00:27:14.822 "ddgst": ${ddgst:-false} 00:27:14.822 }, 00:27:14.822 "method": "bdev_nvme_attach_controller" 00:27:14.822 } 00:27:14.822 EOF 00:27:14.822 )") 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.822 { 00:27:14.822 "params": { 00:27:14.822 "name": "Nvme$subsystem", 00:27:14.822 "trtype": "$TEST_TRANSPORT", 00:27:14.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.822 "adrfam": "ipv4", 00:27:14.822 "trsvcid": "$NVMF_PORT", 00:27:14.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.822 "hdgst": ${hdgst:-false}, 00:27:14.822 "ddgst": ${ddgst:-false} 00:27:14.822 }, 00:27:14.822 "method": "bdev_nvme_attach_controller" 00:27:14.822 } 00:27:14.822 EOF 00:27:14.822 )") 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.822 { 00:27:14.822 "params": { 00:27:14.822 "name": "Nvme$subsystem", 00:27:14.822 "trtype": "$TEST_TRANSPORT", 00:27:14.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.822 "adrfam": "ipv4", 00:27:14.822 "trsvcid": "$NVMF_PORT", 00:27:14.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.822 "hdgst": ${hdgst:-false}, 00:27:14.822 "ddgst": ${ddgst:-false} 00:27:14.822 }, 00:27:14.822 "method": "bdev_nvme_attach_controller" 00:27:14.822 } 00:27:14.822 EOF 00:27:14.822 )") 00:27:14.822 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.822 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.823 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.823 { 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme$subsystem", 00:27:14.823 "trtype": "$TEST_TRANSPORT", 00:27:14.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "$NVMF_PORT", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.823 "hdgst": ${hdgst:-false}, 00:27:14.823 "ddgst": ${ddgst:-false} 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 } 00:27:14.823 EOF 00:27:14.823 )") 00:27:14.823 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.823 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.823 23:40:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:14.823 { 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme$subsystem", 00:27:14.823 "trtype": "$TEST_TRANSPORT", 00:27:14.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "$NVMF_PORT", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.823 "hdgst": ${hdgst:-false}, 00:27:14.823 "ddgst": ${ddgst:-false} 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 } 00:27:14.823 EOF 00:27:14.823 )") 00:27:14.823 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.823 23:40:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:14.823 23:40:35 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:27:14.823 { 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme$subsystem", 00:27:14.823 "trtype": "$TEST_TRANSPORT", 00:27:14.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "$NVMF_PORT", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.823 "hdgst": ${hdgst:-false}, 00:27:14.823 "ddgst": ${ddgst:-false} 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 } 00:27:14.823 EOF 00:27:14.823 )") 00:27:14.823 23:40:35 -- nvmf/common.sh@542 -- # cat 00:27:14.823 23:40:35 -- nvmf/common.sh@544 -- # jq . 00:27:14.823 23:40:35 -- nvmf/common.sh@545 -- # IFS=, 00:27:14.823 23:40:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme1", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme2", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme3", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme4", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme5", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme6", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme7", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 
00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme8", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme9", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.823 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:14.823 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:14.823 "hdgst": false, 00:27:14.823 "ddgst": false 00:27:14.823 }, 00:27:14.823 "method": "bdev_nvme_attach_controller" 00:27:14.823 },{ 00:27:14.823 "params": { 00:27:14.823 "name": "Nvme10", 00:27:14.823 "trtype": "tcp", 00:27:14.823 "traddr": "10.0.0.2", 00:27:14.823 "adrfam": "ipv4", 00:27:14.823 "trsvcid": "4420", 00:27:14.824 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:14.824 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:14.824 "hdgst": false, 00:27:14.824 "ddgst": false 00:27:14.824 }, 00:27:14.824 "method": "bdev_nvme_attach_controller" 00:27:14.824 }' 00:27:14.824 [2024-07-11 23:40:35.613752] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:14.824 [2024-07-11 23:40:35.613838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332068 ] 00:27:14.824 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.824 [2024-07-11 23:40:35.684948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.081 [2024-07-11 23:40:35.772763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.451 Running I/O for 1 seconds... 
00:27:17.824 
00:27:17.824 Latency(us)
00:27:17.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.824 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme1n1 : 1.08 368.63 23.04 0.00 0.00 168452.28 46797.56 149130.81
00:27:17.824 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme2n1 : 1.07 371.25 23.20 0.00 0.00 164974.10 57865.86 125052.40
00:27:17.824 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme3n1 : 1.11 392.69 24.54 0.00 0.00 158516.70 13204.29 121945.51
00:27:17.824 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme4n1 : 1.11 391.34 24.46 0.00 0.00 157784.72 14175.19 120392.06
00:27:17.824 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme5n1 : 1.08 367.14 22.95 0.00 0.00 164361.80 42331.40 125052.40
00:27:17.824 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme6n1 : 1.09 366.20 22.89 0.00 0.00 163710.95 46215.02 125052.40
00:27:17.824 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme7n1 : 1.11 390.09 24.38 0.00 0.00 155498.00 10679.94 128159.29
00:27:17.824 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme8n1 : 1.12 388.59 24.29 0.00 0.00 155132.12 9903.22 132819.63
00:27:17.824 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme9n1 : 1.10 362.88 22.68 0.00 0.00 163376.26 28350.39 142917.03
00:27:17.824 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:17.824 Verification LBA range: start 0x0 length 0x400
00:27:17.824 Nvme10n1 : 1.11 396.55 24.78 0.00 0.00 149886.39 9806.13 127382.57
00:27:17.824 ===================================================================================================================
00:27:17.824 Total : 3795.36 237.21 0.00 0.00 159942.05 9806.13 149130.81
00:27:17.824 23:40:38 -- target/shutdown.sh@93 -- # stoptarget
00:27:17.824 23:40:38 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:17.824 23:40:38 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:17.824 23:40:38 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:17.824 23:40:38 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:17.824 23:40:38 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:17.824 23:40:38 -- nvmf/common.sh@116 -- # sync
00:27:17.824 23:40:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:17.824 23:40:38 -- nvmf/common.sh@119 -- # set +e
00:27:17.824 23:40:38 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:17.824 23:40:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:17.824 rmmod nvme_tcp
00:27:17.824 rmmod nvme_fabrics
00:27:17.824 rmmod nvme_keyring
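Annotation: reading the table above, bdevperf ran with -q 64 -o 65536 -w verify -t 1, so every I/O is 64 KiB and the MiB/s column is simply IOPS divided by 16 (65536 B is 1/16 MiB); the Average/min/max columns are latencies in microseconds per the Latency(us) header. A quick sanity check of the Nvme1n1 row:

# 65536 B per I/O = 1/16 MiB, so MiB/s = IOPS / 16
awk 'BEGIN { printf "%.2f\n", 368.63 / 16 }'    # prints 23.04, matching the MiB/s column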
00:27:17.824 23:40:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:17.824 23:40:38 -- nvmf/common.sh@123 -- # set -e 00:27:17.824 23:40:38 -- nvmf/common.sh@124 -- # return 0 00:27:17.824 23:40:38 -- nvmf/common.sh@477 -- # '[' -n 331319 ']' 00:27:17.824 23:40:38 -- nvmf/common.sh@478 -- # killprocess 331319 00:27:17.824 23:40:38 -- common/autotest_common.sh@926 -- # '[' -z 331319 ']' 00:27:17.824 23:40:38 -- common/autotest_common.sh@930 -- # kill -0 331319 00:27:17.824 23:40:38 -- common/autotest_common.sh@931 -- # uname 00:27:17.824 23:40:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:17.824 23:40:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 331319 00:27:18.081 23:40:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:18.081 23:40:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:18.082 23:40:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 331319' 00:27:18.082 killing process with pid 331319 00:27:18.082 23:40:38 -- common/autotest_common.sh@945 -- # kill 331319 00:27:18.082 23:40:38 -- common/autotest_common.sh@950 -- # wait 331319 00:27:18.648 23:40:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:18.648 23:40:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:18.648 23:40:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:18.648 23:40:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.648 23:40:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:18.648 23:40:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.648 23:40:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.648 23:40:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.555 23:40:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:20.555 00:27:20.555 real 0m13.447s 00:27:20.555 user 0m39.130s 00:27:20.555 sys 0m3.893s 00:27:20.555 23:40:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.555 23:40:41 -- common/autotest_common.sh@10 -- # set +x 00:27:20.555 ************************************ 00:27:20.555 END TEST nvmf_shutdown_tc1 00:27:20.555 ************************************ 00:27:20.555 23:40:41 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:20.555 23:40:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:20.555 23:40:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:20.555 23:40:41 -- common/autotest_common.sh@10 -- # set +x 00:27:20.555 ************************************ 00:27:20.555 START TEST nvmf_shutdown_tc2 00:27:20.555 ************************************ 00:27:20.555 23:40:41 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:27:20.555 23:40:41 -- target/shutdown.sh@98 -- # starttarget 00:27:20.555 23:40:41 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:20.555 23:40:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:20.555 23:40:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.555 23:40:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:20.555 23:40:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:20.555 23:40:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:20.555 23:40:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.555 23:40:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.555 23:40:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.555 23:40:41 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:20.555 23:40:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:20.555 23:40:41 -- common/autotest_common.sh@10 -- # set +x 00:27:20.555 23:40:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:20.555 23:40:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:20.555 23:40:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:20.555 23:40:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:20.555 23:40:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:20.555 23:40:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:20.555 23:40:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:20.555 23:40:41 -- nvmf/common.sh@294 -- # net_devs=() 00:27:20.555 23:40:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:20.555 23:40:41 -- nvmf/common.sh@295 -- # e810=() 00:27:20.555 23:40:41 -- nvmf/common.sh@295 -- # local -ga e810 00:27:20.555 23:40:41 -- nvmf/common.sh@296 -- # x722=() 00:27:20.555 23:40:41 -- nvmf/common.sh@296 -- # local -ga x722 00:27:20.555 23:40:41 -- nvmf/common.sh@297 -- # mlx=() 00:27:20.555 23:40:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:20.555 23:40:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.555 23:40:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:20.555 23:40:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:20.555 23:40:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:20.555 23:40:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:20.555 23:40:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:20.555 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:20.555 23:40:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:20.555 23:40:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:20.555 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:20.555 23:40:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:20.555 23:40:41 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:20.555 23:40:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:20.555 23:40:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.555 23:40:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:20.555 23:40:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.555 23:40:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:20.555 Found net devices under 0000:84:00.0: cvl_0_0 00:27:20.555 23:40:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.555 23:40:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:20.555 23:40:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.555 23:40:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:20.555 23:40:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.555 23:40:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:20.555 Found net devices under 0000:84:00.1: cvl_0_1 00:27:20.555 23:40:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.555 23:40:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:20.555 23:40:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:20.555 23:40:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:20.555 23:40:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:20.555 23:40:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.555 23:40:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.555 23:40:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.555 23:40:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:20.555 23:40:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.555 23:40:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.555 23:40:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:20.555 23:40:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.555 23:40:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.555 23:40:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:20.555 23:40:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:20.555 23:40:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.555 23:40:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.555 23:40:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.555 23:40:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.815 23:40:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:20.815 23:40:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.815 23:40:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.815 23:40:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
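Annotation: the nvmf_tcp_init steps traced above split the two NIC ports between network namespaces: the target port cvl_0_0 moves into the private cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator keeps cvl_0_1 as 10.0.0.1 in the default namespace. Condensed to the essential commands, using the same interface and namespace names the trace shows:

ip netns add cvl_0_0_ns_spdk                     # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

Both directions are then verified with ping before the target application is launched inside the namespace, as the next lines show.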
00:27:20.815 23:40:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:20.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:27:20.815 00:27:20.815 --- 10.0.0.2 ping statistics --- 00:27:20.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.815 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:20.815 23:40:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:27:20.815 00:27:20.815 --- 10.0.0.1 ping statistics --- 00:27:20.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.815 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:27:20.815 23:40:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.815 23:40:41 -- nvmf/common.sh@410 -- # return 0 00:27:20.815 23:40:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:20.815 23:40:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.815 23:40:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:20.815 23:40:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:20.815 23:40:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.815 23:40:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:20.815 23:40:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:20.815 23:40:41 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:20.815 23:40:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:20.815 23:40:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:20.815 23:40:41 -- common/autotest_common.sh@10 -- # set +x 00:27:20.815 23:40:41 -- nvmf/common.sh@469 -- # nvmfpid=332856 00:27:20.815 23:40:41 -- nvmf/common.sh@470 -- # waitforlisten 332856 00:27:20.815 23:40:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:20.815 23:40:41 -- common/autotest_common.sh@819 -- # '[' -z 332856 ']' 00:27:20.815 23:40:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.815 23:40:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:20.815 23:40:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.815 23:40:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:20.815 23:40:41 -- common/autotest_common.sh@10 -- # set +x 00:27:20.815 [2024-07-11 23:40:41.658398] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:20.815 [2024-07-11 23:40:41.658497] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.815 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.815 [2024-07-11 23:40:41.744216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.073 [2024-07-11 23:40:41.853469] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:21.073 [2024-07-11 23:40:41.853645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
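Annotation: the target above is launched with -m 0x1E, a per-core bitmask. 0x1E = 0b11110, i.e. bits 1 through 4 set, so the app claims cores 1-4 and leaves core 0 free, which is consistent with the "Total cores available: 4" notice and the reactor startup lines that follow. A one-liner to decode such a mask:

for core in {0..7}; do (( (0x1E >> core) & 1 )) && echo "core $core"; done   # prints cores 1..4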
00:27:21.073 [2024-07-11 23:40:41.853665] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.073 [2024-07-11 23:40:41.853680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.073 [2024-07-11 23:40:41.853769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.073 [2024-07-11 23:40:41.853824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.073 [2024-07-11 23:40:41.853876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:21.073 [2024-07-11 23:40:41.853878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.006 23:40:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:22.006 23:40:42 -- common/autotest_common.sh@852 -- # return 0 00:27:22.006 23:40:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:22.006 23:40:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:22.006 23:40:42 -- common/autotest_common.sh@10 -- # set +x 00:27:22.006 23:40:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.006 23:40:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.006 23:40:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.006 23:40:42 -- common/autotest_common.sh@10 -- # set +x 00:27:22.006 [2024-07-11 23:40:42.770025] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.006 23:40:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.006 23:40:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:22.006 23:40:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:22.006 23:40:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:22.006 23:40:42 -- common/autotest_common.sh@10 -- # set +x 00:27:22.006 23:40:42 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.006 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.006 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.006 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.006 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.006 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.006 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.006 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.006 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.007 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.007 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.007 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.007 23:40:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.007 23:40:42 -- target/shutdown.sh@28 -- # cat 00:27:22.007 23:40:42 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:22.007 23:40:42 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.007 23:40:42 -- common/autotest_common.sh@10 -- # set +x 00:27:22.007 Malloc1 00:27:22.007 [2024-07-11 23:40:42.864459] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.007 Malloc2 00:27:22.007 Malloc3 00:27:22.274 Malloc4 00:27:22.274 Malloc5 00:27:22.274 Malloc6 00:27:22.274 Malloc7 00:27:22.274 Malloc8 00:27:22.538 Malloc9 00:27:22.538 Malloc10 00:27:22.538 23:40:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.538 23:40:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:22.538 23:40:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:22.538 23:40:43 -- common/autotest_common.sh@10 -- # set +x 00:27:22.538 23:40:43 -- target/shutdown.sh@102 -- # perfpid=333045 00:27:22.538 23:40:43 -- target/shutdown.sh@103 -- # waitforlisten 333045 /var/tmp/bdevperf.sock 00:27:22.538 23:40:43 -- common/autotest_common.sh@819 -- # '[' -z 333045 ']' 00:27:22.538 23:40:43 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:22.538 23:40:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:22.538 23:40:43 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:22.538 23:40:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:22.538 23:40:43 -- nvmf/common.sh@520 -- # config=() 00:27:22.538 23:40:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:22.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
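Annotation: the --json /dev/fd/63 in the bdevperf command traced above is bash process substitution at work; shutdown.sh feeds the generated target config to bdevperf without a temporary file, and the kernel-assigned descriptor (63 here, 62 in the tc1 run earlier) is what shows up in the trace. An equivalent invocation, assuming $rootdir points at the spdk checkout as it does elsewhere in this log:

# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: write/read-back verify,
# -t 10: run time in seconds; <(...) is passed to the program as /dev/fd/NN.
"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 10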
00:27:22.538 23:40:43 -- nvmf/common.sh@520 -- # local subsystem config
00:27:22.538 23:40:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:27:22.538 23:40:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:27:22.538 {
00:27:22.538 "params": {
00:27:22.538 "name": "Nvme$subsystem",
00:27:22.538 "trtype": "$TEST_TRANSPORT",
00:27:22.538 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:22.538 "adrfam": "ipv4",
00:27:22.538 "trsvcid": "$NVMF_PORT",
00:27:22.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:22.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:22.538 "hdgst": ${hdgst:-false},
00:27:22.538 "ddgst": ${ddgst:-false}
00:27:22.538 },
00:27:22.538 "method": "bdev_nvme_attach_controller"
00:27:22.538 }
00:27:22.538 EOF
00:27:22.538 )")
00:27:22.538 23:40:43 -- nvmf/common.sh@542 -- # cat
[... the same "for subsystem / config+=(heredoc) / cat" trace repeats once per subsystem, ten blocks in all; duplicate blocks elided ...]
00:27:22.539 23:40:43 -- nvmf/common.sh@544 -- # jq .
00:27:22.539 23:40:43 -- nvmf/common.sh@545 -- # IFS=,
00:27:22.539 23:40:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:27:22.539 "params": {
00:27:22.539 "name": "Nvme1",
00:27:22.539 "trtype": "tcp",
00:27:22.539 "traddr": "10.0.0.2",
00:27:22.539 "adrfam": "ipv4",
00:27:22.539 "trsvcid": "4420",
00:27:22.539 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:22.539 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:22.539 "hdgst": false,
00:27:22.539 "ddgst": false
00:27:22.539 },
00:27:22.539 "method": "bdev_nvme_attach_controller"
00:27:22.539 },{ ... }'
[... entries for Nvme2 through Nvme10 identical apart from the index in name, subnqn and hostnqn; duplicates elided ...]
00:27:22.539 [2024-07-11 23:40:43.393420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:27:22.539 [2024-07-11 23:40:43.393514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333045 ]
00:27:22.539 EAL: No free 2048 kB hugepages reported on node 1
00:27:22.539 [2024-07-11 23:40:43.463065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:22.797 [2024-07-11 23:40:43.549113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:24.691 Running I/O for 10 seconds...
00:27:24.692 23:40:45 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:24.692 23:40:45 -- common/autotest_common.sh@852 -- # return 0
00:27:24.692 23:40:45 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:24.692 23:40:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:24.692 23:40:45 -- common/autotest_common.sh@10 -- # set +x
00:27:24.692 23:40:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:24.692 23:40:45 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:27:24.692 23:40:45 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:27:24.692 23:40:45 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:27:24.692 23:40:45 -- target/shutdown.sh@57 -- # local ret=1
00:27:24.692 23:40:45 -- target/shutdown.sh@58 -- # local i
00:27:24.692 23:40:45 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:27:24.692 23:40:45 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:24.692 23:40:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:24.692 23:40:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:24.692 23:40:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:24.692 23:40:45 -- common/autotest_common.sh@10 -- # set +x
00:27:24.692 23:40:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:24.692 23:40:45 -- target/shutdown.sh@60 -- # read_io_count=3
00:27:24.692 23:40:45 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:27:24.692 23:40:45 -- target/shutdown.sh@67 -- # sleep 0.25
00:27:24.949 23:40:45 -- target/shutdown.sh@59 -- # (( i-- ))
00:27:24.949 23:40:45 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:24.949 23:40:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:24.949 23:40:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:24.949 23:40:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:24.949 23:40:45 -- common/autotest_common.sh@10 -- # set +x
00:27:25.207 23:40:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:25.207 23:40:45 -- target/shutdown.sh@60 -- # read_io_count=129
00:27:25.207 23:40:45 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']'
00:27:25.207 23:40:45 -- target/shutdown.sh@64 -- # ret=0
00:27:25.207 23:40:45 -- target/shutdown.sh@65 -- # break
00:27:25.207 23:40:45 -- target/shutdown.sh@69 -- # return 0
00:27:25.207 23:40:45 -- target/shutdown.sh@109 -- # killprocess 333045
00:27:25.207 23:40:45 -- common/autotest_common.sh@926 -- # '[' -z 333045 ']'
00:27:25.207 23:40:45 -- common/autotest_common.sh@930 -- # kill -0 333045
00:27:25.207 23:40:45 -- common/autotest_common.sh@931 -- # uname
00:27:25.207 23:40:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:25.207 23:40:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 333045
00:27:25.207 23:40:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:25.207 23:40:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:25.207 23:40:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 333045'
00:27:25.207 killing process with pid 333045
00:27:25.207 23:40:45 -- common/autotest_common.sh@945 -- # kill 333045
00:27:25.207 23:40:45 -- common/autotest_common.sh@950 -- # wait 333045
00:27:25.207 Received shutdown signal, test time was about 0.616671 seconds
00:27:25.207 
00:27:25.207 Latency(us)
00:27:25.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:25.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme1n1 : 0.58 390.81 24.43 0.00 0.00 157817.42 22816.24 157674.76
00:27:25.207 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme2n1 : 0.62 370.23 23.14 0.00 0.00 154131.18 23592.96 147577.36
00:27:25.207 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme3n1 : 0.58 389.92 24.37 0.00 0.00 154090.93 23495.87 139033.41
00:27:25.207 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme4n1 : 0.58 393.98 24.62 0.00 0.00 150137.48 24369.68 120392.06
00:27:25.207 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme5n1 : 0.58 393.07 24.57 0.00 0.00 148409.02 24272.59 121168.78
00:27:25.207 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme6n1 : 0.59 388.24 24.27 0.00 0.00 148347.27 24078.41 130489.46
00:27:25.207 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme7n1 : 0.58 392.11 24.51 0.00 0.00 144633.99 23301.69 114178.28
00:27:25.207 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme8n1 : 0.59 386.83 24.18 0.00 0.00 144904.40 22816.24 125829.12
00:27:25.207 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme9n1 : 0.59 385.09 24.07 0.00 0.00 143505.95 23010.42 128159.29
00:27:25.207 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:25.207 Verification LBA range: start 0x0 length 0x400
00:27:25.207 Nvme10n1 : 0.55 347.20 21.70 0.00 0.00 155730.91 22233.69 127382.57
00:27:25.207 ===================================================================================================================
00:27:25.207 Total : 3837.50 239.84 0.00 0.00 150076.62 22233.69 157674.76
00:27:25.465 23:40:46 -- target/shutdown.sh@112 -- # sleep 1
00:27:26.397 23:40:47 -- target/shutdown.sh@113 -- # kill -0 332856
00:27:26.397 23:40:47 -- target/shutdown.sh@115 -- # stoptarget
00:27:26.397 23:40:47 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:26.397 23:40:47 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:26.397 23:40:47 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:26.397 23:40:47 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:26.397 23:40:47 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:26.397 23:40:47 -- nvmf/common.sh@116 -- # sync
00:27:26.397 23:40:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:26.397 23:40:47 -- nvmf/common.sh@119 -- # set +e
00:27:26.397 23:40:47 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:26.397 23:40:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:26.655 rmmod nvme_tcp
00:27:26.655 rmmod nvme_fabrics
00:27:26.655 rmmod nvme_keyring
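Annotation: killprocess, traced above for the bdevperf pid 333045 and below for the target pid 332856, is the common teardown helper. It verifies the pid is alive with the signal-0 trick, checks the process name so a stale pid cannot take down an unrelated process (and refuses to kill a sudo wrapper), then kills and reaps it. A sketch reconstructed from the traced steps; treat the exact body as an assumption:

killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                 # signal 0: existence check only
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1     # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap it so the test can proceed
}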
nvme_keyring 00:27:26.655 23:40:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:26.655 23:40:47 -- nvmf/common.sh@123 -- # set -e 00:27:26.655 23:40:47 -- nvmf/common.sh@124 -- # return 0 00:27:26.655 23:40:47 -- nvmf/common.sh@477 -- # '[' -n 332856 ']' 00:27:26.655 23:40:47 -- nvmf/common.sh@478 -- # killprocess 332856 00:27:26.655 23:40:47 -- common/autotest_common.sh@926 -- # '[' -z 332856 ']' 00:27:26.655 23:40:47 -- common/autotest_common.sh@930 -- # kill -0 332856 00:27:26.655 23:40:47 -- common/autotest_common.sh@931 -- # uname 00:27:26.655 23:40:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:26.655 23:40:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 332856 00:27:26.655 23:40:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:26.655 23:40:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:26.655 23:40:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 332856' 00:27:26.655 killing process with pid 332856 00:27:26.655 23:40:47 -- common/autotest_common.sh@945 -- # kill 332856 00:27:26.655 23:40:47 -- common/autotest_common.sh@950 -- # wait 332856 00:27:27.222 23:40:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:27.222 23:40:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:27.222 23:40:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:27.222 23:40:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.222 23:40:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:27.222 23:40:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.222 23:40:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.222 23:40:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.137 23:40:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:29.138 00:27:29.138 real 0m8.612s 00:27:29.138 user 0m27.625s 00:27:29.138 sys 0m1.645s 00:27:29.138 23:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.138 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.138 ************************************ 00:27:29.138 END TEST nvmf_shutdown_tc2 00:27:29.138 ************************************ 00:27:29.138 23:40:50 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:29.138 23:40:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:29.138 23:40:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.138 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.138 ************************************ 00:27:29.138 START TEST nvmf_shutdown_tc3 00:27:29.138 ************************************ 00:27:29.138 23:40:50 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:27:29.138 23:40:50 -- target/shutdown.sh@120 -- # starttarget 00:27:29.138 23:40:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:29.138 23:40:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:29.138 23:40:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.138 23:40:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:29.138 23:40:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:29.138 23:40:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:29.138 23:40:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.138 23:40:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.138 23:40:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.138 23:40:50 
-- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:29.138 23:40:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:29.138 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.138 23:40:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:29.138 23:40:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:29.138 23:40:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:29.138 23:40:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:29.138 23:40:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:29.138 23:40:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:29.138 23:40:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:29.138 23:40:50 -- nvmf/common.sh@294 -- # net_devs=() 00:27:29.138 23:40:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:29.138 23:40:50 -- nvmf/common.sh@295 -- # e810=() 00:27:29.138 23:40:50 -- nvmf/common.sh@295 -- # local -ga e810 00:27:29.138 23:40:50 -- nvmf/common.sh@296 -- # x722=() 00:27:29.138 23:40:50 -- nvmf/common.sh@296 -- # local -ga x722 00:27:29.138 23:40:50 -- nvmf/common.sh@297 -- # mlx=() 00:27:29.138 23:40:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:29.138 23:40:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.138 23:40:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:29.138 23:40:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:29.138 23:40:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:29.138 23:40:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.138 23:40:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:29.138 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:29.138 23:40:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.138 23:40:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:29.138 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:29.138 23:40:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.138 23:40:50 
-- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:29.138 23:40:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.138 23:40:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.138 23:40:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.138 23:40:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.138 23:40:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:29.138 Found net devices under 0000:84:00.0: cvl_0_0 00:27:29.138 23:40:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.138 23:40:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.138 23:40:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.138 23:40:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.138 23:40:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.138 23:40:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:29.138 Found net devices under 0000:84:00.1: cvl_0_1 00:27:29.138 23:40:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.138 23:40:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:29.138 23:40:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:29.138 23:40:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:29.138 23:40:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:29.138 23:40:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.138 23:40:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.138 23:40:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.138 23:40:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:29.138 23:40:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.138 23:40:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.138 23:40:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:29.138 23:40:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.138 23:40:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.138 23:40:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:29.138 23:40:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:29.138 23:40:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.399 23:40:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.399 23:40:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.399 23:40:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.399 23:40:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:29.399 23:40:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.399 23:40:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.399 23:40:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
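The device discovery traced above reduces to a small /sys walk: each candidate PCI function is mapped to the kernel net devices bound to it, which is how the two Intel 0x159b (E810) ports 0000:84:00.0 and 0000:84:00.1 resolve to cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup, assuming the PCI addresses this run found; everything else is illustrative rather than the harness's exact code:

#!/usr/bin/env bash
# Sketch of the PCI -> netdev mapping traced at nvmf/common.sh@381-389.
shopt -s nullglob                 # an empty glob then means "no netdev on this port"
pci_devs=("0000:84:00.0" "0000:84:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    # every kernel net device bound to this function appears as a directory here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if ((${#pci_net_devs[@]} == 0)); then
        echo "No net devices under $pci" >&2
        continue
    fi
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done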
00:27:29.399 23:40:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:29.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:27:29.399 00:27:29.399 --- 10.0.0.2 ping statistics --- 00:27:29.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.399 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:29.399 23:40:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:27:29.399 00:27:29.399 --- 10.0.0.1 ping statistics --- 00:27:29.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.399 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:27:29.399 23:40:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.399 23:40:50 -- nvmf/common.sh@410 -- # return 0 00:27:29.399 23:40:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:29.399 23:40:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.399 23:40:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:29.399 23:40:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:29.399 23:40:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.399 23:40:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:29.399 23:40:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:29.399 23:40:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:29.399 23:40:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:29.399 23:40:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.399 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.399 23:40:50 -- nvmf/common.sh@469 -- # nvmfpid=333975 00:27:29.399 23:40:50 -- nvmf/common.sh@470 -- # waitforlisten 333975 00:27:29.399 23:40:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:29.399 23:40:50 -- common/autotest_common.sh@819 -- # '[' -z 333975 ']' 00:27:29.399 23:40:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.399 23:40:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:29.399 23:40:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.399 23:40:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.399 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.399 [2024-07-11 23:40:50.310451] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
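Before nvmf_tgt is launched, nvmf_tcp_init has split the two ports into a point-to-point rig: cvl_0_0 moves into the fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits the NVMe/TCP port, and one ping in each direction proves the link, as the statistics above show. Condensed from the trace into a runnable sketch; the interface and namespace names are the ones this run used:

# Rebuild the target/initiator rig the way nvmf_tcp_init (nvmf/common.sh@228-267) does.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# admit NVMe/TCP traffic arriving on the initiator port, ahead of existing rules
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator

Keeping both endpoints on one host but in separate namespaces is what lets a single-node CI job exercise a real TCP transport path instead of loopback shortcuts.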
00:27:29.399 [2024-07-11 23:40:50.310543] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.657 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.657 [2024-07-11 23:40:50.418125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.657 [2024-07-11 23:40:50.542583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:29.657 [2024-07-11 23:40:50.542794] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.657 [2024-07-11 23:40:50.542842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.657 [2024-07-11 23:40:50.542879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.657 [2024-07-11 23:40:50.542963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.657 [2024-07-11 23:40:50.543035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.657 [2024-07-11 23:40:50.543068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:29.657 [2024-07-11 23:40:50.543092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.915 23:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:29.915 23:40:50 -- common/autotest_common.sh@852 -- # return 0 00:27:29.915 23:40:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:29.915 23:40:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:29.915 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.915 23:40:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.915 23:40:50 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.915 23:40:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.915 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.915 [2024-07-11 23:40:50.731602] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.915 23:40:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.915 23:40:50 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:29.915 23:40:50 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:29.915 23:40:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.915 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.915 23:40:50 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- 
target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.915 23:40:50 -- target/shutdown.sh@28 -- # cat 00:27:29.915 23:40:50 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:29.915 23:40:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.915 23:40:50 -- common/autotest_common.sh@10 -- # set +x 00:27:29.915 Malloc1 00:27:29.915 [2024-07-11 23:40:50.822244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.915 Malloc2 00:27:30.173 Malloc3 00:27:30.173 Malloc4 00:27:30.173 Malloc5 00:27:30.173 Malloc6 00:27:30.173 Malloc7 00:27:30.431 Malloc8 00:27:30.431 Malloc9 00:27:30.431 Malloc10 00:27:30.431 23:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.431 23:40:51 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:30.431 23:40:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:30.431 23:40:51 -- common/autotest_common.sh@10 -- # set +x 00:27:30.431 23:40:51 -- target/shutdown.sh@124 -- # perfpid=334160 00:27:30.431 23:40:51 -- target/shutdown.sh@125 -- # waitforlisten 334160 /var/tmp/bdevperf.sock 00:27:30.431 23:40:51 -- common/autotest_common.sh@819 -- # '[' -z 334160 ']' 00:27:30.431 23:40:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.431 23:40:51 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:30.431 23:40:51 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:30.431 23:40:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:30.431 23:40:51 -- nvmf/common.sh@520 -- # config=() 00:27:30.431 23:40:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
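The ten `# cat` calls above append one block of RPCs per subsystem to rpcs.txt; xtrace does not echo heredoc bodies, so the exact commands are not visible in this log. A plausible reconstruction, given the Malloc1..Malloc10 bdevs created above and the 10.0.0.2:4420 listener the target announces: treat the bdev size, serial numbers, and listener arguments as assumptions, not the harness's literal text:

# Hypothetical expansion of the create_subsystems loop: one Malloc-backed
# subsystem per iteration, replayed as a single rpc.py batch (rpc.py accepts
# one command per line on stdin) against the running nvmf_tgt.
rpcs=rpcs.txt
: > "$rpcs"
for i in {1..10}; do
cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < "$rpcs"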
00:27:30.431 23:40:51 -- nvmf/common.sh@520 -- # local subsystem config 00:27:30.431 23:40:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:30.431 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.431 23:40:51 -- common/autotest_common.sh@10 -- # set +x 00:27:30.431 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.431 { 00:27:30.431 "params": { 00:27:30.431 "name": "Nvme$subsystem", 00:27:30.431 "trtype": "$TEST_TRANSPORT", 00:27:30.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.431 "adrfam": "ipv4", 00:27:30.431 "trsvcid": "$NVMF_PORT", 00:27:30.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.431 "hdgst": ${hdgst:-false}, 00:27:30.431 "ddgst": ${ddgst:-false} 00:27:30.431 }, 00:27:30.431 "method": "bdev_nvme_attach_controller" 00:27:30.431 } 00:27:30.431 EOF 00:27:30.431 )") 00:27:30.431 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.431 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.431 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.431 { 00:27:30.431 "params": { 00:27:30.431 "name": "Nvme$subsystem", 00:27:30.431 "trtype": "$TEST_TRANSPORT", 00:27:30.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.431 "adrfam": "ipv4", 00:27:30.431 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:30.432 { 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme$subsystem", 00:27:30.432 "trtype": "$TEST_TRANSPORT", 00:27:30.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "$NVMF_PORT", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.432 "hdgst": ${hdgst:-false}, 00:27:30.432 "ddgst": ${ddgst:-false} 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 } 00:27:30.432 EOF 00:27:30.432 )") 00:27:30.432 23:40:51 -- nvmf/common.sh@542 -- # cat 00:27:30.432 23:40:51 -- nvmf/common.sh@544 -- # jq . 00:27:30.432 23:40:51 -- nvmf/common.sh@545 -- # IFS=, 00:27:30.432 23:40:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme1", 00:27:30.432 "trtype": "tcp", 00:27:30.432 "traddr": "10.0.0.2", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "4420", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:30.432 "hdgst": false, 00:27:30.432 "ddgst": false 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 },{ 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme2", 00:27:30.432 "trtype": "tcp", 00:27:30.432 "traddr": "10.0.0.2", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "4420", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:30.432 "hdgst": false, 00:27:30.432 "ddgst": false 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 },{ 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme3", 00:27:30.432 "trtype": "tcp", 00:27:30.432 "traddr": "10.0.0.2", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "4420", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:30.432 "hdgst": false, 00:27:30.432 "ddgst": false 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 },{ 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme4", 00:27:30.432 "trtype": "tcp", 00:27:30.432 "traddr": "10.0.0.2", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "4420", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:30.432 "hdgst": false, 00:27:30.432 "ddgst": false 00:27:30.432 }, 00:27:30.432 "method": "bdev_nvme_attach_controller" 00:27:30.432 },{ 00:27:30.432 "params": { 00:27:30.432 "name": "Nvme5", 00:27:30.432 "trtype": "tcp", 00:27:30.432 "traddr": "10.0.0.2", 00:27:30.432 "adrfam": "ipv4", 00:27:30.432 "trsvcid": "4420", 00:27:30.432 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:30.432 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:30.433 "hdgst": false, 00:27:30.433 "ddgst": false 00:27:30.433 }, 00:27:30.433 "method": "bdev_nvme_attach_controller" 00:27:30.433 },{ 00:27:30.433 "params": { 00:27:30.433 "name": "Nvme6", 00:27:30.433 "trtype": "tcp", 00:27:30.433 "traddr": "10.0.0.2", 00:27:30.433 "adrfam": "ipv4", 00:27:30.433 "trsvcid": "4420", 00:27:30.433 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:30.433 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:30.433 "hdgst": false, 00:27:30.433 "ddgst": false 00:27:30.433 }, 00:27:30.433 "method": "bdev_nvme_attach_controller" 00:27:30.433 },{ 00:27:30.433 "params": { 00:27:30.433 "name": "Nvme7", 00:27:30.433 "trtype": 
"tcp", 00:27:30.433 "traddr": "10.0.0.2", 00:27:30.433 "adrfam": "ipv4", 00:27:30.433 "trsvcid": "4420", 00:27:30.433 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:30.433 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:30.433 "hdgst": false, 00:27:30.433 "ddgst": false 00:27:30.433 }, 00:27:30.433 "method": "bdev_nvme_attach_controller" 00:27:30.433 },{ 00:27:30.433 "params": { 00:27:30.433 "name": "Nvme8", 00:27:30.433 "trtype": "tcp", 00:27:30.433 "traddr": "10.0.0.2", 00:27:30.433 "adrfam": "ipv4", 00:27:30.433 "trsvcid": "4420", 00:27:30.433 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:30.433 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:30.433 "hdgst": false, 00:27:30.433 "ddgst": false 00:27:30.433 }, 00:27:30.433 "method": "bdev_nvme_attach_controller" 00:27:30.433 },{ 00:27:30.433 "params": { 00:27:30.433 "name": "Nvme9", 00:27:30.433 "trtype": "tcp", 00:27:30.433 "traddr": "10.0.0.2", 00:27:30.433 "adrfam": "ipv4", 00:27:30.433 "trsvcid": "4420", 00:27:30.433 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:30.433 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:30.433 "hdgst": false, 00:27:30.433 "ddgst": false 00:27:30.433 }, 00:27:30.433 "method": "bdev_nvme_attach_controller" 00:27:30.433 },{ 00:27:30.433 "params": { 00:27:30.433 "name": "Nvme10", 00:27:30.433 "trtype": "tcp", 00:27:30.433 "traddr": "10.0.0.2", 00:27:30.433 "adrfam": "ipv4", 00:27:30.433 "trsvcid": "4420", 00:27:30.433 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:30.433 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:30.433 "hdgst": false, 00:27:30.433 "ddgst": false 00:27:30.433 }, 00:27:30.433 "method": "bdev_nvme_attach_controller" 00:27:30.433 }' 00:27:30.433 [2024-07-11 23:40:51.362686] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:30.433 [2024-07-11 23:40:51.362770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334160 ] 00:27:30.690 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.690 [2024-07-11 23:40:51.437119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.690 [2024-07-11 23:40:51.525254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.214 Running I/O for 10 seconds... 
00:27:33.483 23:40:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:33.483 23:40:54 -- common/autotest_common.sh@852 -- # return 0 00:27:33.483 23:40:54 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:33.483 23:40:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.483 23:40:54 -- common/autotest_common.sh@10 -- # set +x 00:27:33.483 23:40:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.483 23:40:54 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:33.483 23:40:54 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:33.483 23:40:54 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:33.483 23:40:54 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:33.483 23:40:54 -- target/shutdown.sh@57 -- # local ret=1 00:27:33.483 23:40:54 -- target/shutdown.sh@58 -- # local i 00:27:33.483 23:40:54 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:33.483 23:40:54 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:33.483 23:40:54 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:33.483 23:40:54 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:33.483 23:40:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.483 23:40:54 -- common/autotest_common.sh@10 -- # set +x 00:27:33.483 23:40:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.483 23:40:54 -- target/shutdown.sh@60 -- # read_io_count=254 00:27:33.483 23:40:54 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:27:33.483 23:40:54 -- target/shutdown.sh@64 -- # ret=0 00:27:33.483 23:40:54 -- target/shutdown.sh@65 -- # break 00:27:33.483 23:40:54 -- target/shutdown.sh@69 -- # return 0 00:27:33.483 23:40:54 -- target/shutdown.sh@134 -- # killprocess 333975 00:27:33.483 23:40:54 -- common/autotest_common.sh@926 -- # '[' -z 333975 ']' 00:27:33.483 23:40:54 -- common/autotest_common.sh@930 -- # kill -0 333975 00:27:33.483 23:40:54 -- common/autotest_common.sh@931 -- # uname 00:27:33.483 23:40:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:33.483 23:40:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 333975 00:27:33.796 23:40:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:33.796 23:40:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:33.796 23:40:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 333975' 00:27:33.796 killing process with pid 333975 00:27:33.796 23:40:54 -- common/autotest_common.sh@945 -- # kill 333975 00:27:33.796 23:40:54 -- common/autotest_common.sh@950 -- # wait 333975 00:27:33.796 [2024-07-11 23:40:54.430054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96390 is same with the state(5) to be set 00:27:33.796 [2024-07-11 23:40:54.430145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96390 is same with the state(5) to be set 00:27:33.796 [2024-07-11 23:40:54.430167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96390 is same with the state(5) to be set 00:27:33.796 [2024-07-11 23:40:54.430180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96390 is same with the state(5) to be set 00:27:33.796 [2024-07-11 23:40:54.430205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd96390 is same with the state(5) to be set
00:27:33.797 [2024-07-11 23:40:54.432613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98d20 is same with the state(5) to be set
00:27:33.797 [2024-07-11 23:40:54.433552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96820 is same with the state(5) to be set
00:27:33.798 [2024-07-11 23:40:54.436214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set
recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.436952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.436965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.436977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.436989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.437002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.437022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.437035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.437048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.437060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.437072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.437084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cd0 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.798 [2024-07-11 23:40:54.443246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443259] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 
00:27:33.799 [2024-07-11 23:40:54.443558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is 
same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.443929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97160 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.444997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97610 is same with the state(5) to be set 00:27:33.799 [2024-07-11 23:40:54.445274] 
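[Context for triage: the repeated *ERROR* line above is SPDK's TCP transport rejecting a no-op state change. The log itself names the emitter, nvmf_tcp_qpair_set_recv_state() at tcp.c:1574, which is being asked to set a qpair's receive state to the value it already holds (state 5). Below is a minimal, self-contained sketch of that kind of guard; the struct and function names are simplified stand-ins for illustration, not the verbatim SPDK source.]

    #include <stdio.h>

    /* Hypothetical, stripped-down stand-in for SPDK's TCP qpair; the real
     * structure in lib/nvmf/tcp.c carries many more fields. */
    struct tcp_qpair {
        int recv_state;
    };

    /* Sketch of the guard behind the logged error: a request to enter the
     * state the qpair is already in is logged and ignored, so every such
     * call during connection teardown produces one of the lines above. */
    static void set_recv_state(struct tcp_qpair *tqpair, int state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = 5 };
        set_recv_state(&q, 5); /* reproduces one instance of the message */
        return 0;
    }

[Compiled and run, the sketch prints one copy of the error line; in the test above the transition is attempted repeatedly, hence the flood.]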
00:27:33.800 [2024-07-11 23:40:54.445418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.800 [2024-07-11 23:40:54.445476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.800 [2024-07-11 23:40:54.445495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.800 [2024-07-11 23:40:54.445510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.800 [2024-07-11 23:40:54.445528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.800 [2024-07-11 23:40:54.445542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.800 [2024-07-11 23:40:54.445556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.800 [2024-07-11 23:40:54.445570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.800 [2024-07-11 23:40:54.445586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2741940 is same with the state(5) to be set
[... the same sequence -- four aborted ASYNC EVENT REQUESTs followed by an nvme_tcp.c:322 recv-state error -- repeats for tqpair=0x2572670, 0x258df60, 0x25753d0, and 0x2739190, 23:40:54.445643 through 23:40:54.446336 ...]
00:27:33.800 [2024-07-11 23:40:54.446882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97ac0 is same with the state(5) to be set
[... identical *ERROR* line repeated for tqpair=0xd97ac0 through 23:40:54.447848, interleaved with the aborted I/O below ...]
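[Context for triage: in the completions above, "(00/08)" is the status-code-type/status-code pair printed by spdk_nvme_print_completion: SCT 0x0 (generic command status) with SC 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion; outstanding ASYNC EVENT REQUESTs are completed with this status as the queues are torn down. The small decoder below is a hedged illustration covering only the codes seen in this log; nvme_status_str is a hypothetical helper, not an SPDK API.]

    #include <stdio.h>

    /* Maps the (SCT/SC) pair from the log to the label SPDK prints.
     * Only the two generic status codes occurring in this log are handled. */
    static const char *nvme_status_str(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESS";
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION"; /* Command Aborted due to SQ Deletion */
        return "UNKNOWN";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
        return 0;
    }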
00:27:33.801 [2024-07-11 23:40:54.447108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.801 [2024-07-11 23:40:54.447721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.801 [2024-07-11 23:40:54.447738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.802 [2024-07-11 23:40:54.447982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.802 [2024-07-11 23:40:54.447996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11
23:40:54.448012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.802 [2024-07-11 23:40:54.448975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.802 [2024-07-11 23:40:54.448991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.803 [2024-07-11 23:40:54.449276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.803 [2024-07-11 23:40:54.449295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x377cef0 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449692] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449828] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x377cef0 was disconnected and freed. reset controller. 
00:27:33.803 [2024-07-11 23:40:54.449841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.449987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is 
same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.450241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97f50 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.451382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.451409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.451423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.451436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.451448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.803 [2024-07-11 23:40:54.451461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:33.804 [2024-07-11 23:40:54.451777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2741940 (9): Bad file descriptor 00:27:33.804 [2024-07-11 23:40:54.451819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451892] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.804 [2024-07-11 23:40:54.451908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.451987] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.804 [2024-07-11 23:40:54.451999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the 
state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.452249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd983e0 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453129] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.804 [2024-07-11 23:40:54.453150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be 
set 00:27:33.804 [2024-07-11 23:40:54.453243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.804 [2024-07-11 23:40:54.453423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 
is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.453867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98870 is same with the state(5) to be set 00:27:33.805 [2024-07-11 23:40:54.454136] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.805 [2024-07-11 23:40:54.454219] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.805 [2024-07-11 23:40:54.454601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 
nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.454970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.454986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.805 [2024-07-11 23:40:54.455270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.805 [2024-07-11 23:40:54.455286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45440 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.455774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.455788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2604ef0 is same with the state(5) to be set 00:27:33.806 [2024-07-11 23:40:54.455881] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2604ef0 was disconnected and freed. reset controller. 
00:27:33.806 [2024-07-11 23:40:54.455984] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.806 [2024-07-11 23:40:54.456268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2596130 is same with the state(5) to be set 00:27:33.806 [2024-07-11 23:40:54.456429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2572670 (9): Bad file descriptor 00:27:33.806 [2024-07-11 23:40:54.456473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258df60 (9): Bad file descriptor 00:27:33.806 [2024-07-11 23:40:54.456522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2640b30 is same with the state(5) to be set 
00:27:33.806 [2024-07-11 23:40:54.456673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25753d0 (9): Bad file descriptor 00:27:33.806 [2024-07-11 23:40:54.456716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2640f60 is same with the state(5) to be set 00:27:33.806 [2024-07-11 23:40:54.456887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.456976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.456989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.457002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26392c0 is same with the state(5) to be set 00:27:33.806 [2024-07-11 23:40:54.457046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.457066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.457081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.457094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.457108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.457129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.457165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.806 [2024-07-11 23:40:54.457183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.806 [2024-07-11 23:40:54.457202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2594110 is same with the state(5) to be set 00:27:33.806 [2024-07-11 23:40:54.457232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2739190 (9): Bad file descriptor 00:27:33.806 [2024-07-11 23:40:54.458386] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.806 [2024-07-11 23:40:54.458451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.806 [2024-07-11 23:40:54.458473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.458968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.458985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.807 [2024-07-11 23:40:54.459858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.807 [2024-07-11 23:40:54.459873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.459889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.459904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.459921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.459936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.459952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.459967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.459984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.459999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.460471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.460489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.478815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.478877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.478897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.478913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.478930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c19a0 is same with the state(5) to be set 00:27:33.808 [2024-07-11 23:40:54.479041] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26c19a0 was disconnected and freed. reset controller. 
00:27:33.808 [2024-07-11 23:40:54.479265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:33.808 [2024-07-11 23:40:54.479422] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2741940 (9): Bad file descriptor 00:27:33.808 [2024-07-11 23:40:54.479455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2596130 (9): Bad file descriptor 00:27:33.808 [2024-07-11 23:40:54.479515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2640b30 (9): Bad file descriptor 00:27:33.808 [2024-07-11 23:40:54.479550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2640f60 (9): Bad file descriptor 00:27:33.808 [2024-07-11 23:40:54.479575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26392c0 (9): Bad file descriptor 00:27:33.808 [2024-07-11 23:40:54.479605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2594110 (9): Bad file descriptor 00:27:33.808 [2024-07-11 23:40:54.481265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-07-11 23:40:54.481456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.808 [2024-07-11 23:40:54.481482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25753d0 with addr=10.0.0.2, port=4420 00:27:33.808 [2024-07-11 23:40:54.481509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25753d0 is same with the state(5) to be set 00:27:33.808 [2024-07-11 23:40:54.481597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.481978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.481994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.482011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.482027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.482043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.482058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.482075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.808 [2024-07-11 23:40:54.482090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.808 [2024-07-11 23:40:54.482106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.809 [2024-07-11 23:40:54.482784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.482970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.482990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 
[2024-07-11 23:40:54.483101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.809 [2024-07-11 23:40:54.483291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.809 [2024-07-11 23:40:54.483305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 
23:40:54.483444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.483693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.483708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26187f0 is same with the state(5) to be set 00:27:33.810 [2024-07-11 23:40:54.484942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.810 [2024-07-11 23:40:54.484966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.810 [2024-07-11 23:40:54.484989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.810 [2024-07-11 23:40:54.485005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for the remaining outstanding qid:1 READ/WRITE commands (len:128, lba 34944-45312), each completed ABORTED - SQ DELETION (00/08) ...]
00:27:33.811 [2024-07-11 23:40:54.487044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2562a70 is same with the state(5) to be set
00:27:33.811 [2024-07-11 23:40:54.488537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... second dump of the same repeated *NOTICE* pairs for qid:1 (len:128, lba 34944-45312), again all completed ABORTED - SQ DELETION (00/08) ...]
00:27:33.813 [2024-07-11 23:40:54.490628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bc260 is same with the state(5) to be set
00:27:33.813 [2024-07-11 23:40:54.491990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.813 [2024-07-11 23:40:54.492023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:33.813 [2024-07-11 23:40:54.492042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:33.813 [2024-07-11 23:40:54.492107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25753d0 (9): Bad file descriptor
00:27:33.813 [2024-07-11 23:40:54.492131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:33.813 [2024-07-11 23:40:54.492158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:33.813 [2024-07-11 23:40:54.492178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:33.813 [2024-07-11 23:40:54.492241] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:33.813 [2024-07-11 23:40:54.492284] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:33.813 [2024-07-11 23:40:54.492322] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:33.813 [2024-07-11 23:40:54.492702] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:33.813 [2024-07-11 23:40:54.492741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:33.813 [2024-07-11 23:40:54.492762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.813 [2024-07-11 23:40:54.493026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.813 [2024-07-11 23:40:54.493262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.813 [2024-07-11 23:40:54.493289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2572670 with addr=10.0.0.2, port=4420
00:27:33.813 [2024-07-11 23:40:54.493307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572670 is same with the state(5) to be set
00:27:33.813 [2024-07-11 23:40:54.493512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.813 [2024-07-11 23:40:54.493688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.813 [2024-07-11 23:40:54.493713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2739190 with addr=10.0.0.2, port=4420
00:27:33.813 [2024-07-11 23:40:54.493730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2739190 is same with the state(5) to be set
00:27:33.813 [2024-07-11 23:40:54.493879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.813 [2024-07-11 23:40:54.494085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.813 [2024-07-11 23:40:54.494110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x258df60 with addr=10.0.0.2, port=4420
00:27:33.813 [2024-07-11 23:40:54.494126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258df60 is same with the state(5) to be set
00:27:33.813 [2024-07-11 23:40:54.494146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:33.813 [2024-07-11 23:40:54.494161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:33.813 [2024-07-11 23:40:54.494175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:33.813 [2024-07-11 23:40:54.495023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.813 [2024-07-11 23:40:54.495048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... third dump of repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for qid:1 (len:128, lba 34944-45312), each completed ABORTED - SQ DELETION (00/08) ...]
00:27:33.815 [2024-07-11 23:40:54.497098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bd800 is same with the state(5) to be set
00:27:33.815 [2024-07-11 23:40:54.498345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.815 [2024-07-11 23:40:54.498369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.815 [2024-07-11 23:40:54.498390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.815 [2024-07-11 23:40:54.498406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.815 [2024-07-11 23:40:54.498424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.815 [2024-07-11 23:40:54.498439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.815 [2024-07-11 23:40:54.498456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.815 [2024-07-11 23:40:54.498471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.815 [2024-07-11 23:40:54.498488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.815 [2024-07-11 23:40:54.498503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.815 [2024-07-11 23:40:54.498519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.815 [2024-07-11 23:40:54.498534] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.498979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.498996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.815 [2024-07-11 23:40:54.499290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.815 [2024-07-11 23:40:54.499305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.499974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.499990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.500401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.500416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bede0 is same with the state(5) to be set 00:27:33.816 [2024-07-11 23:40:54.501642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.501693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.501726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.501759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.501790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.501820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.501852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.816 [2024-07-11 23:40:54.501884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.816 [2024-07-11 23:40:54.501898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.501914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.501930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.501952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.501967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.501984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.501999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37888 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.502965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.502981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:33.817 [2024-07-11 23:40:54.502996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.503013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.503028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.503045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.817 [2024-07-11 23:40:54.503060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.817 [2024-07-11 23:40:54.503077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 
[2024-07-11 23:40:54.503324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 
23:40:54.503641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.503704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.503719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c03c0 is same with the state(5) to be set 00:27:33.818 [2024-07-11 23:40:54.504954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.504978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.505000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.505017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.505035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.505051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.505068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.505083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.505099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.505114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.505132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.505155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.505173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.818 [2024-07-11 23:40:54.505189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.818 [2024-07-11 23:40:54.505211] nvme_qpair.c: 
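A note on reading these completions: SPDK's spdk_nvme_print_completion() renders the NVMe status as an (sct/sc) pair, so "(00/08)" is status code type 0x0 (Generic Command Status) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion — the expected outcome when a submission queue is torn down while I/O is still queued. A minimal standalone sketch of the decode follows (plain C, for illustration only; decode_status is a hypothetical helper, not SPDK code, and it assumes the NVMe 1.x layout of the 16-bit completion status word: phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper (not part of SPDK): unpack the 16-bit status word
 * taken from the upper half of completion queue entry dword 3. */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;         /* phase tag        */
    unsigned sc  = (status >> 1) & 0xff; /* status code      */
    unsigned sct = (status >> 9) & 0x7;  /* status code type */

    printf("p:%u sct:0x%02x sc:0x%02x -> %s\n", p, sct, sc,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION (00/08)"
                                      : "other status");
}

int main(void)
{
    /* The status word behind the log lines above: SCT 0x0, SC 0x08. */
    decode_status((0x0 << 9) | (0x08 << 1));
    return 0;
}

The dnr:0 printed on each completion is the Do Not Retry bit left clear, consistent with an administrative abort rather than an unrecoverable error.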
[log condensed: 00:27:33.818-00:27:33.819, 2024-07-11 23:40:54.505-54.506 — the same NOTICE/ABORTED - SQ DELETION (00/08) pairs continue for the next qpair's outstanding I/O on sqid:1 (READ/WRITE, nsid:1, lba 35200-44032, len:128, SGL TRANSPORT DATA BLOCK)]
00:27:33.819 [2024-07-11 23:40:54.506516] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.506971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.506989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.507003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.507020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.819 [2024-07-11 23:40:54.507034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.819 [2024-07-11 23:40:54.507050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621d90 is same with the state(5) to be set 00:27:33.819 [2024-07-11 23:40:54.509213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
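The (00/08) status above decodes to Status Code Type 0x0 (Generic Command Status) / Status Code 0x08 (Command Aborted due to SQ Deletion): queued I/O is drained when the reset path tears down the submission queue, which is the intended effect of this shutdown test. When triaging a capture like this offline it is easier to summarize the abort storm than to scan it; a minimal sketch, assuming the bdevperf console output was saved to try.txt (the capture filename is an assumption, borrowed from the harness's usual try.txt):
grep -c 'ABORTED - SQ DELETION' try.txt                        # total aborted completions
grep -oE '(READ|WRITE) sqid:[0-9]+' try.txt | sort | uniq -c   # aborted commands split by opcode and queue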
00:27:33.820 [2024-07-11 23:40:54.509243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:33.820 [2024-07-11 23:40:54.509267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:33.820 [2024-07-11 23:40:54.509286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:33.820 task offset: 40320 on job bdev=Nvme10n1 fails
00:27:33.820
00:27:33.820 Latency(us)
00:27:33.820 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error at the runtime shown)
00:27:33.820 Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average    min       max
00:27:33.820 Nvme1n1            : 0.92        316.14   19.76   69.53   0.00  165084.26  81167.55  156121.32
00:27:33.820 Nvme2n1            : 0.92        314.99   19.69   69.28   0.00  164339.26  94760.20  132042.90
00:27:33.820 Nvme3n1            : 0.89        361.30   22.58   40.27   0.00  155560.92  4611.79   134373.07
00:27:33.820 Nvme4n1            : 0.93        313.78   19.61   69.01   0.00  162347.13  84662.80  146800.64
00:27:33.820 Nvme5n1            : 0.93        311.60   19.48   68.53   0.00  162210.39  86992.97  144470.47
00:27:33.820 Nvme6n1            : 0.94        310.51   19.41   68.29   0.00  161434.76  86992.97  139033.41
00:27:33.820 Nvme7n1            : 0.94        309.43   19.34   68.05   0.00  160722.58  98255.45  132819.63
00:27:33.820 Nvme8n1            : 0.92        317.48   19.84   69.82   0.00  155048.67  43690.67  127382.57
00:27:33.820 Nvme9n1            : 0.94        315.75   19.73   67.81   0.00  155663.22  13592.65  127382.57
00:27:33.820 Nvme10n1           : 0.89        327.89   20.49   72.11   0.00  147291.36  5679.79   128159.29
00:27:33.820 ===================================================================================================================
00:27:33.820 Total              :             3198.87  199.93  662.71  0.00  158959.92  4611.79   156121.32
00:27:33.820 [2024-07-11 23:40:54.539327] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:33.820 [2024-07-11 23:40:54.539742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.820 [2024-07-11 23:40:54.539966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.820 [2024-07-11 23:40:54.539994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2640f60 with addr=10.0.0.2, port=4420
00:27:33.820 [2024-07-11 23:40:54.540028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2640f60 is same with the state(5) to be set
00:27:33.820 [2024-07-11 23:40:54.540058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2572670 (9): Bad file descriptor
00:27:33.820 [2024-07-11 23:40:54.540084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2739190 (9): Bad file descriptor
00:27:33.820 [2024-07-11 23:40:54.540102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258df60 (9): Bad file descriptor
00:27:33.820 [2024-07-11 23:40:54.540183-540264] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (4x)
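For context, a table like this is produced by bdevperf driving the ten NVMe-oF bdevs while shutdown.sh kills the target underneath it. A rough sketch of an equivalent standalone run, under stated assumptions (paths relative to the spdk checkout; the -t duration, bdev name and explicit attach are assumptions, since the real test wires this up through shutdown.sh):
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # prints the Latency(us) table on completion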
00:27:33.820 [2024-07-11 23:40:54.540285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2640f60 (9): Bad file descriptor
00:27:33.820 [2024-07-11 23:40:54.540462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:33.820 [2024-07-11 23:40:54.540776-542053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (2x per qpair), then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2596130, 0x2594110 and 0x26392c0 with addr=10.0.0.2, port=4420, each followed by nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of the tqpair is same with the state(5) to be set
00:27:33.820 [2024-07-11 23:40:54.542071-542208] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode1], [cnode2], [cnode4]: Ctrlr is in error state (nvme_ctrlr.c:4028); controller reinitialization failed (nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async); in failed state (nvme_ctrlr.c:1029:nvme_ctrlr_fail)
00:27:33.820 [2024-07-11 23:40:54.542254-542354] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (6x)
00:27:33.820 [2024-07-11 23:40:54.543429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:33.820 [2024-07-11 23:40:54.543459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:33.820 [2024-07-11 23:40:54.543493-543538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (3x)
00:27:33.820 [2024-07-11 23:40:54.543807-544078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (2x); nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2640b30 with addr=10.0.0.2, port=4420; recv state error for the same tqpair
00:27:33.820 [2024-07-11 23:40:54.544097-544135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2596130 / 0x2594110 / 0x26392c0 (9): Bad file descriptor
00:27:33.820 [2024-07-11 23:40:54.544176-544212] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode8]: Ctrlr is in error state; controller reinitialization failed; in failed state
00:27:33.820 [2024-07-11 23:40:54.544298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.820 [2024-07-11 23:40:54.544557-545264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (4x); nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2741940 and 0x25753d0 with addr=10.0.0.2, port=4420; recv state errors for both tqpairs
00:27:33.821 [2024-07-11 23:40:54.545283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2640b30 (9): Bad file descriptor
00:27:33.821 [2024-07-11 23:40:54.545301-545426] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode5], [cnode6], [cnode7]: Ctrlr is in error state; controller reinitialization failed; in failed state
00:27:33.821 [2024-07-11 23:40:54.545499-545532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (3x)
00:27:33.821 [2024-07-11 23:40:54.545548-545568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2741940 / 0x25753d0 (9): Bad file descriptor
00:27:33.821 [2024-07-11 23:40:54.545584-545757] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode9], [cnode10], [cnode3]: Ctrlr is in error state; controller reinitialization failed; in failed state
00:27:33.821 [2024-07-11 23:40:54.545665-545812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (3x)
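errno 111 is ECONNREFUSED: by this point spdk_app_stop has torn the target down, so every reconnect to 10.0.0.2:4420 is refused and the per-cnode controllers end up permanently failed, which is exactly the behaviour this tc3 case wants to provoke. Two quick checks when triaging this by hand (a sketch; the namespace name comes from the test setup later in this log, and the errno.h path is the common Linux location, not something this log shows):
grep -w 111 /usr/include/asm-generic/errno.h                       # => #define ECONNREFUSED 111 /* Connection refused */
ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo "no listener on 4420"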
00:27:34.080 23:40:55 -- target/shutdown.sh@135 -- # nvmfpid=
00:27:34.080 23:40:55 -- target/shutdown.sh@138 -- # sleep 1
00:27:35.458 23:40:56 -- target/shutdown.sh@141 -- # kill -9 334160
00:27:35.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (334160) - No such process
00:27:35.458 23:40:56 -- target/shutdown.sh@141 -- # true
00:27:35.458 23:40:56 -- target/shutdown.sh@143 -- # stoptarget
00:27:35.458 23:40:56 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:35.458 23:40:56 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:35.458 23:40:56 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:35.458 23:40:56 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:35.458 23:40:56 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:35.458 23:40:56 -- nvmf/common.sh@116 -- # sync
00:27:35.458 23:40:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:35.458 23:40:56 -- nvmf/common.sh@119 -- # set +e
00:27:35.458 23:40:56 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:35.458 23:40:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:35.458 rmmod nvme_tcp
00:27:35.458 rmmod nvme_fabrics
00:27:35.458 rmmod nvme_keyring
00:27:35.458 23:40:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:27:35.458 23:40:56 -- nvmf/common.sh@123 -- # set -e
00:27:35.458 23:40:56 -- nvmf/common.sh@124 -- # return 0
00:27:35.458 23:40:56 -- nvmf/common.sh@477 -- # '[' -n '' ']'
00:27:35.458 23:40:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:27:35.458 23:40:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:27:35.458 23:40:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:27:35.458 23:40:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:35.458 23:40:56 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:27:35.458 23:40:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:35.458 23:40:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:35.458 23:40:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:37.363 23:40:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:27:37.363
00:27:37.363 real    0m8.102s
00:27:37.363 user    0m21.557s
00:27:37.363 sys     0m1.811s
00:27:37.363 23:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:37.363 23:40:58 -- common/autotest_common.sh@10 -- # set +x
00:27:37.363 ************************************
00:27:37.363 END TEST nvmf_shutdown_tc3
00:27:37.363 ************************************
00:27:37.363 23:40:58 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT
00:27:37.363
00:27:37.363 real    0m30.341s
00:27:37.363 user    1m28.374s
00:27:37.363 sys     0m7.488s
00:27:37.363 23:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:37.363 23:40:58 -- common/autotest_common.sh@10 -- # set +x
00:27:37.363 ************************************
00:27:37.363 END TEST nvmf_shutdown
00:27:37.363 ************************************
00:27:37.363 23:40:58 -- nvmf/nvmf.sh@86 -- # timing_exit target
00:27:37.363 23:40:58 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:37.363 23:40:58 -- common/autotest_common.sh@10 -- # set +x
00:27:37.363 23:40:58 -- nvmf/nvmf.sh@88 -- # timing_enter host
00:27:37.363 23:40:58 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:37.363 23:40:58 -- common/autotest_common.sh@10 -- # set +x
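nvmftestfini's module teardown, as traced above, reduces to a small retry loop. A standalone sketch of the same cleanup, run as root (the '&& break' retry shape and the comment are assumptions about the loop body; the trace only shows the loop header and the two modprobe calls):
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # the rmmod lines above show this also drops nvme_fabrics and nvme_keyring
done
modprobe -v -r nvme-fabrics             # second pass in case only the fabrics module is left, as in nvmf/common.sh@122
set -e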
00:27:37.363 23:40:58 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:27:37.363 23:40:58 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:37.363 23:40:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:27:37.363 23:40:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:37.363 23:40:58 -- common/autotest_common.sh@10 -- # set +x
00:27:37.363 ************************************
00:27:37.363 START TEST nvmf_multicontroller
00:27:37.363 ************************************
00:27:37.363 23:40:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:37.363 * Looking for test storage...
00:27:37.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:37.363 23:40:58 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:37.363 23:40:58 -- nvmf/common.sh@7 -- # uname -s
00:27:37.363 23:40:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:37.363 23:40:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:37.363 23:40:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:37.363 23:40:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:37.363 23:40:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:37.363 23:40:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:37.363 23:40:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:37.363 23:40:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:37.363 23:40:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:37.363 23:40:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:37.622 23:40:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:37.622 23:40:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:27:37.622 23:40:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:37.622 23:40:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:37.622 23:40:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:37.622 23:40:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:37.622 23:40:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:37.622 23:40:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:37.622 23:40:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:37.622 23:40:58 -- paths/export.sh@2-6 -- # PATH prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin (the same three toolchain directories, stacked several times over) ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin, then exports and echoes the result [four near-identical full PATH dumps trimmed]
00:27:37.622 23:40:58 -- nvmf/common.sh@46 -- # : 0
00:27:37.622 23:40:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:27:37.622 23:40:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:27:37.622 23:40:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:27:37.622 23:40:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:37.622 23:40:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:37.622 23:40:58 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:27:37.622 23:40:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:27:37.622 23:40:58 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:27:37.622 23:40:58 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:37.622 23:40:58 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:37.622 23:40:58 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:27:37.622 23:40:58 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:27:37.622 23:40:58 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:27:37.622 23:40:58 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:27:37.622 23:40:58 -- host/multicontroller.sh@23 -- # nvmftestinit
00:27:37.622 23:40:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:27:37.622 23:40:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:37.622 23:40:58 -- nvmf/common.sh@436 -- # prepare_net_devs
00:27:37.622 23:40:58 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:27:37.622 23:40:58 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:27:37.622 23:40:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:37.622 23:40:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:37.622 23:40:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
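The NVME_HOSTNQN/NVME_HOSTID pair captured above is generated by nvme-cli from the machine's host UUID. To reproduce it outside the harness (the /etc/nvme path is nvme-cli's usual persistent location, an assumption rather than something this log shows):
nvme gen-hostnqn                    # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
cat /etc/nvme/hostnqn 2>/dev/null   # persistent value, if nvme-cli has written one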
00:27:37.622 23:40:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:27:37.622 23:40:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:27:37.622 23:40:58 -- nvmf/common.sh@284 -- # xtrace_disable
00:27:37.622 23:40:58 -- common/autotest_common.sh@10 -- # set +x
00:27:40.154 23:41:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:27:40.154 23:41:00 -- nvmf/common.sh@290-317 -- # declare pci_devs/pci_net_devs/pci_drivers/net_devs and populate the e810/x722/mlx device-ID arrays from pci_bus_cache [repetitive array appends trimmed]
00:27:40.155 23:41:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:27:40.155 23:41:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:27:40.155 23:41:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:27:40.155 23:41:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:27:40.155 23:41:00 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:27:40.155 Found 0000:84:00.0 (0x8086 - 0x159b)
00:27:40.155 Found 0000:84:00.1 (0x8086 - 0x159b)
00:27:40.155 [per-device ice-driver and device-ID checks trimmed]
00:27:40.155 Found net devices under 0000:84:00.0: cvl_0_0
00:27:40.155 Found net devices under 0000:84:00.1: cvl_0_1
00:27:40.155 23:41:01 -- nvmf/common.sh@402 -- # is_hw=yes
00:27:40.155 23:41:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:27:40.155 23:41:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:40.155 23:41:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:40.155 23:41:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:40.155 23:41:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:40.155 23:41:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:40.155 23:41:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:40.155 23:41:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:27:40.155 23:41:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:27:40.155 23:41:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:27:40.155 23:41:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:40.155 23:41:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:40.155 23:41:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:40.155 23:41:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:27:40.155 23:41:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:40.412 23:41:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:40.412 23:41:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:40.412 23:41:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:27:40.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:40.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms
00:27:40.413 --- 10.0.0.2 ping statistics ---
00:27:40.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:40.413 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:27:40.413 23:41:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:40.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:40.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:27:40.413 --- 10.0.0.1 ping statistics ---
00:27:40.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:40.413 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:27:40.413 23:41:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:40.413 23:41:01 -- nvmf/common.sh@410 -- # return 0
00:27:40.413 23:41:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:40.413 23:41:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:40.413 23:41:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:27:40.413 23:41:01 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:27:40.413 23:41:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:40.413 23:41:01 -- nvmf/common.sh@469 -- # nvmfpid=336764
00:27:40.413 23:41:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:40.413 23:41:01 -- nvmf/common.sh@470 -- # waitforlisten 336764
00:27:40.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:40.413 [2024-07-11 23:41:01.271911] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:27:40.413 [2024-07-11 23:41:01.272077] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:40.670 EAL: No free 2048 kB hugepages reported on node 1
00:27:40.670 [2024-07-11 23:41:01.396396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:40.670 [2024-07-11 23:41:01.502565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:40.670 [2024-07-11 23:41:01.502740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
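Collected from the trace above, this is the complete wiring step: the target NIC (cvl_0_0) moves into its own network namespace so target and initiator traverse a real TCP path on one box. A sketch to replay it by hand, run as root (the commands are verbatim from the trace; only the device names depend on the discovery step):
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side lives in the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # root namespace -> target namespace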
00:27:40.670 [2024-07-11 23:41:01.502761] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:40.670 [2024-07-11 23:41:01.502775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:40.670 [2024-07-11 23:41:01.502913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:40.670 [2024-07-11 23:41:01.502965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:40.670 [2024-07-11 23:41:01.502968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:41.604 23:41:02 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:41.604 23:41:02 -- common/autotest_common.sh@852 -- # return 0
00:27:41.604 23:41:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:27:41.604 23:41:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:41.604 23:41:02 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:41.604 [2024-07-11 23:41:02.379646] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:41.604 23:41:02 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:41.604 Malloc0
00:27:41.604 23:41:02 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:41.604 23:41:02 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:41.604 23:41:02 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:41.604 [2024-07-11 23:41:02.447021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:41.604 23:41:02 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:41.605 [2024-07-11 23:41:02.454873] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:41.605 [per-command '[[ 0 == 0 ]]' status checks and xtrace_disable/set +x noise trimmed]
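The cnode1 setup above, collected into one copy-pasteable sequence. The subcommands and arguments are verbatim from the trace; the only assumption is the explicit rpc.py path, since the trace goes through the rpc_cmd shell wrapper (the spdk.sock UNIX socket is visible from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk):
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421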
00:27:41.605 23:41:02 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:27:41.605 Malloc1
00:27:41.605 23:41:02 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:27:41.605 23:41:02 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:27:41.605 23:41:02 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:27:41.605 23:41:02 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:27:41.605 23:41:02 -- host/multicontroller.sh@44 -- # bdevperf_pid=337001
00:27:41.605 23:41:02 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:41.605 23:41:02 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:27:41.605 23:41:02 -- host/multicontroller.sh@47 -- # waitforlisten 337001 /var/tmp/bdevperf.sock
00:27:41.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:41.605 [waitforlisten locals ('[' -z 337001 ']', rpc_addr=/var/tmp/bdevperf.sock, max_retries=100) and status-check noise trimmed]
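The next step (below) attaches controller NVMe0 through the bdevperf RPC socket, confirms exactly one controller exists, and then verifies that a second attach under a different hostnqn is rejected. The same three calls as a standalone sketch, assuming bdevperf is already running with -z -r /var/tmp/bdevperf.sock as started above; the second attach is expected to fail with JSON-RPC error -114:
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # expect: 1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001   # expect: "A controller named NVMe0 already exists..."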
00:27:41.605 23:41:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:41.606 23:41:02 -- common/autotest_common.sh@10 -- # set +x 00:27:42.171 23:41:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:42.171 23:41:02 -- common/autotest_common.sh@852 -- # return 0 00:27:42.171 23:41:02 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:42.171 23:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.171 23:41:02 -- common/autotest_common.sh@10 -- # set +x 00:27:42.171 NVMe0n1 00:27:42.171 23:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.171 23:41:02 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:42.171 23:41:02 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:42.171 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.171 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.171 23:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.171 1 00:27:42.171 23:41:03 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:42.171 23:41:03 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.171 23:41:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:42.171 23:41:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:42.171 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.171 23:41:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:42.171 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.171 23:41:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:42.171 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.171 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.171 request: 00:27:42.171 { 00:27:42.171 "name": "NVMe0", 00:27:42.171 "trtype": "tcp", 00:27:42.171 "traddr": "10.0.0.2", 00:27:42.171 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:42.171 "hostaddr": "10.0.0.2", 00:27:42.171 "hostsvcid": "60000", 00:27:42.171 "adrfam": "ipv4", 00:27:42.171 "trsvcid": "4420", 00:27:42.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.171 "method": "bdev_nvme_attach_controller", 00:27:42.172 "req_id": 1 00:27:42.172 } 00:27:42.172 Got JSON-RPC error response 00:27:42.172 response: 00:27:42.172 { 00:27:42.172 "code": -114, 00:27:42.172 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:42.172 } 00:27:42.172 23:41:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@643 -- # es=1 00:27:42.172 23:41:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.172 23:41:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.172 23:41:03 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:42.172 23:41:03 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.172 23:41:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:42.172 23:41:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.172 23:41:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:42.172 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.172 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.172 request: 00:27:42.172 { 00:27:42.172 "name": "NVMe0", 00:27:42.172 "trtype": "tcp", 00:27:42.172 "traddr": "10.0.0.2", 00:27:42.172 "hostaddr": "10.0.0.2", 00:27:42.172 "hostsvcid": "60000", 00:27:42.172 "adrfam": "ipv4", 00:27:42.172 "trsvcid": "4420", 00:27:42.172 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:42.172 "method": "bdev_nvme_attach_controller", 00:27:42.172 "req_id": 1 00:27:42.172 } 00:27:42.172 Got JSON-RPC error response 00:27:42.172 response: 00:27:42.172 { 00:27:42.172 "code": -114, 00:27:42.172 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:42.172 } 00:27:42.172 23:41:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@643 -- # es=1 00:27:42.172 23:41:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.172 23:41:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.172 23:41:03 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:42.172 23:41:03 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.172 23:41:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:42.172 23:41:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.172 23:41:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:42.172 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.172 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.172 request: 00:27:42.172 { 00:27:42.172 "name": "NVMe0", 00:27:42.172 "trtype": "tcp", 00:27:42.172 "traddr": "10.0.0.2", 00:27:42.172 "hostaddr": 
"10.0.0.2", 00:27:42.172 "hostsvcid": "60000", 00:27:42.172 "adrfam": "ipv4", 00:27:42.172 "trsvcid": "4420", 00:27:42.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.172 "multipath": "disable", 00:27:42.172 "method": "bdev_nvme_attach_controller", 00:27:42.172 "req_id": 1 00:27:42.172 } 00:27:42.172 Got JSON-RPC error response 00:27:42.172 response: 00:27:42.172 { 00:27:42.172 "code": -114, 00:27:42.172 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:42.172 } 00:27:42.172 23:41:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@643 -- # es=1 00:27:42.172 23:41:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.172 23:41:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.172 23:41:03 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:42.172 23:41:03 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.172 23:41:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:42.172 23:41:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:42.172 23:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.172 23:41:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:42.172 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.172 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.172 request: 00:27:42.172 { 00:27:42.172 "name": "NVMe0", 00:27:42.172 "trtype": "tcp", 00:27:42.172 "traddr": "10.0.0.2", 00:27:42.172 "hostaddr": "10.0.0.2", 00:27:42.172 "hostsvcid": "60000", 00:27:42.172 "adrfam": "ipv4", 00:27:42.172 "trsvcid": "4420", 00:27:42.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.172 "multipath": "failover", 00:27:42.172 "method": "bdev_nvme_attach_controller", 00:27:42.172 "req_id": 1 00:27:42.172 } 00:27:42.172 Got JSON-RPC error response 00:27:42.172 response: 00:27:42.172 { 00:27:42.172 "code": -114, 00:27:42.172 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:42.172 } 00:27:42.172 23:41:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@643 -- # es=1 00:27:42.172 23:41:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.172 23:41:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.172 23:41:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.172 23:41:03 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:42.172 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.172 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.429 00:27:42.429 23:41:03 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:27:42.429 23:41:03 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:42.429 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.429 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.429 23:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.429 23:41:03 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:42.429 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.429 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.429 00:27:42.429 23:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.429 23:41:03 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:42.429 23:41:03 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:42.429 23:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.429 23:41:03 -- common/autotest_common.sh@10 -- # set +x 00:27:42.429 23:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.429 23:41:03 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:42.429 23:41:03 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:43.849 0 00:27:43.849 23:41:04 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:43.849 23:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.849 23:41:04 -- common/autotest_common.sh@10 -- # set +x 00:27:43.849 23:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.849 23:41:04 -- host/multicontroller.sh@100 -- # killprocess 337001 00:27:43.849 23:41:04 -- common/autotest_common.sh@926 -- # '[' -z 337001 ']' 00:27:43.849 23:41:04 -- common/autotest_common.sh@930 -- # kill -0 337001 00:27:43.849 23:41:04 -- common/autotest_common.sh@931 -- # uname 00:27:43.849 23:41:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:43.849 23:41:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 337001 00:27:43.849 23:41:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:43.849 23:41:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:43.849 23:41:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 337001' 00:27:43.849 killing process with pid 337001 00:27:43.849 23:41:04 -- common/autotest_common.sh@945 -- # kill 337001 00:27:43.849 23:41:04 -- common/autotest_common.sh@950 -- # wait 337001 00:27:43.849 23:41:04 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:43.849 23:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.849 23:41:04 -- common/autotest_common.sh@10 -- # set +x 00:27:43.849 23:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.849 23:41:04 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:43.849 23:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.849 23:41:04 -- common/autotest_common.sh@10 -- # set +x 00:27:43.849 23:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.849 23:41:04 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:43.849 
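The NOT-wrapped attaches before this all fail with JSON-RPC error -114, since the bdev controller name NVMe0 is already bound to its host identity and network path (and, with "multipath": "disable", to a single path). With NVMe0 attached on port 4420 and NVMe1 on 4421, and the grep -c check above confirming exactly two controllers, the I/O phase is one helper call; the lone "0" in the trace is its status. A sketch of that run-and-teardown tail, under the same socket and build-tree assumptions as the earlier sketches:

    # Kick off the preconfigured write workload in the idle bdevperf instance.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # Detach the remaining controller and stop bdevperf.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
    kill "$bdevperf_pid" && wait "$bdevperf_pid" || true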
23:41:04 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:43.849 23:41:04 -- common/autotest_common.sh@1597 -- # read -r file 00:27:43.849 23:41:04 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:43.849 23:41:04 -- common/autotest_common.sh@1596 -- # sort -u 00:27:43.849 23:41:04 -- common/autotest_common.sh@1598 -- # cat 00:27:43.849 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:43.849 [2024-07-11 23:41:02.562098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:43.849 [2024-07-11 23:41:02.562213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337001 ] 00:27:43.849 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.849 [2024-07-11 23:41:02.629330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.849 [2024-07-11 23:41:02.714370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.849 [2024-07-11 23:41:03.297956] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name d07cd29c-ac7d-406d-a4ff-94c07b6ea0dd already exists 00:27:43.849 [2024-07-11 23:41:03.297999] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:d07cd29c-ac7d-406d-a4ff-94c07b6ea0dd alias for bdev NVMe1n1 00:27:43.849 [2024-07-11 23:41:03.298017] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:43.849 Running I/O for 1 seconds... 00:27:43.849 00:27:43.849 Latency(us) 00:27:43.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.849 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:43.850 NVMe0n1 : 1.00 19754.50 77.17 0.00 0.00 6462.14 2512.21 9951.76 00:27:43.850 =================================================================================================================== 00:27:43.850 Total : 19754.50 77.17 0.00 0.00 6462.14 2512.21 9951.76 00:27:43.850 Received shutdown signal, test time was about 1.000000 seconds 00:27:43.850 00:27:43.850 Latency(us) 00:27:43.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.850 =================================================================================================================== 00:27:43.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:43.850 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:43.850 23:41:04 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:43.850 23:41:04 -- common/autotest_common.sh@1597 -- # read -r file 00:27:43.850 23:41:04 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:43.850 23:41:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:43.850 23:41:04 -- nvmf/common.sh@116 -- # sync 00:27:43.850 23:41:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:43.850 23:41:04 -- nvmf/common.sh@119 -- # set +e 00:27:43.850 23:41:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:43.850 23:41:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:43.850 rmmod nvme_tcp 00:27:43.850 rmmod nvme_fabrics 00:27:44.108 rmmod nvme_keyring 00:27:44.108 23:41:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:44.108 23:41:04 -- nvmf/common.sh@123 -- # set -e 00:27:44.108 
23:41:04 -- nvmf/common.sh@124 -- # return 0 00:27:44.108 23:41:04 -- nvmf/common.sh@477 -- # '[' -n 336764 ']' 00:27:44.108 23:41:04 -- nvmf/common.sh@478 -- # killprocess 336764 00:27:44.108 23:41:04 -- common/autotest_common.sh@926 -- # '[' -z 336764 ']' 00:27:44.108 23:41:04 -- common/autotest_common.sh@930 -- # kill -0 336764 00:27:44.108 23:41:04 -- common/autotest_common.sh@931 -- # uname 00:27:44.108 23:41:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:44.108 23:41:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 336764 00:27:44.108 23:41:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:44.108 23:41:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:44.108 23:41:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 336764' 00:27:44.108 killing process with pid 336764 00:27:44.108 23:41:04 -- common/autotest_common.sh@945 -- # kill 336764 00:27:44.108 23:41:04 -- common/autotest_common.sh@950 -- # wait 336764 00:27:44.369 23:41:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:44.369 23:41:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:44.369 23:41:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:44.369 23:41:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.369 23:41:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:44.369 23:41:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.369 23:41:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.369 23:41:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.900 23:41:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:46.900 00:27:46.900 real 0m8.989s 00:27:46.900 user 0m14.427s 00:27:46.900 sys 0m3.078s 00:27:46.900 23:41:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.900 23:41:07 -- common/autotest_common.sh@10 -- # set +x 00:27:46.900 ************************************ 00:27:46.900 END TEST nvmf_multicontroller 00:27:46.900 ************************************ 00:27:46.900 23:41:07 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:46.900 23:41:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:46.900 23:41:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:46.900 23:41:07 -- common/autotest_common.sh@10 -- # set +x 00:27:46.900 ************************************ 00:27:46.900 START TEST nvmf_aer 00:27:46.900 ************************************ 00:27:46.900 23:41:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:46.900 * Looking for test storage... 
00:27:46.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.900 23:41:07 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.900 23:41:07 -- nvmf/common.sh@7 -- # uname -s 00:27:46.900 23:41:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.900 23:41:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.900 23:41:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.900 23:41:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.900 23:41:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.900 23:41:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.900 23:41:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.900 23:41:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.900 23:41:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.900 23:41:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.900 23:41:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:46.900 23:41:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:46.900 23:41:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.900 23:41:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.900 23:41:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.900 23:41:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.900 23:41:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.900 23:41:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.900 23:41:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.900 23:41:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.900 23:41:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.900 23:41:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.900 23:41:07 -- paths/export.sh@5 -- # export PATH 00:27:46.900 23:41:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.900 23:41:07 -- nvmf/common.sh@46 -- # : 0 00:27:46.900 23:41:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:46.900 23:41:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:46.900 23:41:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:46.900 23:41:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.900 23:41:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.900 23:41:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:46.900 23:41:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:46.900 23:41:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:46.900 23:41:07 -- host/aer.sh@11 -- # nvmftestinit 00:27:46.900 23:41:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:46.900 23:41:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.900 23:41:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:46.900 23:41:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:46.900 23:41:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:46.900 23:41:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.900 23:41:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.900 23:41:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.900 23:41:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:46.901 23:41:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:46.901 23:41:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:46.901 23:41:07 -- common/autotest_common.sh@10 -- # set +x 00:27:49.436 23:41:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:49.436 23:41:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:49.436 23:41:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:49.436 23:41:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:49.436 23:41:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:49.436 23:41:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:49.436 23:41:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:49.436 23:41:09 -- nvmf/common.sh@294 -- # net_devs=() 00:27:49.436 23:41:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:49.436 23:41:09 -- nvmf/common.sh@295 -- # e810=() 00:27:49.436 23:41:09 -- nvmf/common.sh@295 -- # local -ga e810 00:27:49.436 23:41:09 -- nvmf/common.sh@296 -- # x722=() 00:27:49.436 
23:41:09 -- nvmf/common.sh@296 -- # local -ga x722 00:27:49.436 23:41:09 -- nvmf/common.sh@297 -- # mlx=() 00:27:49.436 23:41:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:49.436 23:41:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.436 23:41:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:49.436 23:41:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:49.436 23:41:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:49.436 23:41:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:49.436 23:41:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:49.436 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:49.436 23:41:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:49.436 23:41:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:49.436 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:49.436 23:41:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:49.436 23:41:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:49.436 23:41:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.436 23:41:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:49.436 23:41:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.436 23:41:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:49.436 Found net devices under 0000:84:00.0: cvl_0_0 00:27:49.436 23:41:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.436 23:41:09 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:49.436 23:41:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.436 23:41:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:49.436 23:41:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.436 23:41:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:49.436 Found net devices under 0000:84:00.1: cvl_0_1 00:27:49.436 23:41:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.436 23:41:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:49.436 23:41:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:49.436 23:41:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:49.436 23:41:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:49.436 23:41:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.436 23:41:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.436 23:41:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.436 23:41:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:49.436 23:41:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.436 23:41:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.436 23:41:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:49.436 23:41:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.436 23:41:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.436 23:41:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:49.436 23:41:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:49.436 23:41:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.436 23:41:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.436 23:41:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.436 23:41:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.436 23:41:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:49.436 23:41:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.436 23:41:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.436 23:41:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.436 23:41:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:49.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:27:49.436 00:27:49.436 --- 10.0.0.2 ping statistics --- 00:27:49.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.436 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:27:49.436 23:41:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:49.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:49.436 00:27:49.436 --- 10.0.0.1 ping statistics --- 00:27:49.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.436 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:49.436 23:41:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.436 23:41:10 -- nvmf/common.sh@410 -- # return 0 00:27:49.436 23:41:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:49.436 23:41:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.436 23:41:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:49.436 23:41:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:49.436 23:41:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.436 23:41:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:49.436 23:41:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:49.436 23:41:10 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:49.436 23:41:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:49.436 23:41:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:49.436 23:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:49.436 23:41:10 -- nvmf/common.sh@469 -- # nvmfpid=339257 00:27:49.436 23:41:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:49.436 23:41:10 -- nvmf/common.sh@470 -- # waitforlisten 339257 00:27:49.436 23:41:10 -- common/autotest_common.sh@819 -- # '[' -z 339257 ']' 00:27:49.436 23:41:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.437 23:41:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:49.437 23:41:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.437 23:41:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:49.437 23:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:49.437 [2024-07-11 23:41:10.220370] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:49.437 [2024-07-11 23:41:10.220450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.437 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.437 [2024-07-11 23:41:10.300894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.695 [2024-07-11 23:41:10.399742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:49.695 [2024-07-11 23:41:10.399908] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.695 [2024-07-11 23:41:10.399928] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.695 [2024-07-11 23:41:10.399948] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
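The startup notice above names both ways to look at the target's tracepoints: attach to the live shared-memory ring, or copy it for offline decoding. A sketch of the two, assuming the build tree's spdk_trace tool (the -f file option follows SPDK's tracing docs and is an assumption, not shown in this log):

    # Live view of the running target's trace ring (app name nvmf, shm id 0),
    # exactly as the notice suggests:
    ./build/bin/spdk_trace -s nvmf -i 0 | head
    # Or snapshot the shm file now and decode it after the target exits:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    ./build/bin/spdk_trace -f /tmp/nvmf_trace.0 | head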
00:27:49.695 [2024-07-11 23:41:10.400006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.695 [2024-07-11 23:41:10.400087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.695 [2024-07-11 23:41:10.400089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.695 [2024-07-11 23:41:10.400034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.628 23:41:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:50.628 23:41:11 -- common/autotest_common.sh@852 -- # return 0 00:27:50.628 23:41:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:50.628 23:41:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:50.628 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.628 23:41:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.628 23:41:11 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:50.628 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.628 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.628 [2024-07-11 23:41:11.283930] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.628 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.628 23:41:11 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:50.628 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.628 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.628 Malloc0 00:27:50.628 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.628 23:41:11 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:50.628 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.628 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.628 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.628 23:41:11 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:50.628 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.628 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.628 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.628 23:41:11 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.628 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.628 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.628 [2024-07-11 23:41:11.337530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.628 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.628 23:41:11 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:50.628 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.628 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.628 [2024-07-11 23:41:11.345204] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:50.628 [ 00:27:50.628 { 00:27:50.628 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:50.628 "subtype": "Discovery", 00:27:50.628 "listen_addresses": [], 00:27:50.628 "allow_any_host": true, 00:27:50.628 "hosts": [] 00:27:50.628 }, 00:27:50.628 { 00:27:50.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:27:50.628 "subtype": "NVMe", 00:27:50.628 "listen_addresses": [ 00:27:50.628 { 00:27:50.628 "transport": "TCP", 00:27:50.628 "trtype": "TCP", 00:27:50.628 "adrfam": "IPv4", 00:27:50.628 "traddr": "10.0.0.2", 00:27:50.628 "trsvcid": "4420" 00:27:50.628 } 00:27:50.628 ], 00:27:50.628 "allow_any_host": true, 00:27:50.628 "hosts": [], 00:27:50.628 "serial_number": "SPDK00000000000001", 00:27:50.628 "model_number": "SPDK bdev Controller", 00:27:50.628 "max_namespaces": 2, 00:27:50.628 "min_cntlid": 1, 00:27:50.628 "max_cntlid": 65519, 00:27:50.628 "namespaces": [ 00:27:50.628 { 00:27:50.628 "nsid": 1, 00:27:50.628 "bdev_name": "Malloc0", 00:27:50.628 "name": "Malloc0", 00:27:50.628 "nguid": "013AB5B3EDB648398DCFCD70C3DEDEE3", 00:27:50.628 "uuid": "013ab5b3-edb6-4839-8dcf-cd70c3dedee3" 00:27:50.628 } 00:27:50.628 ] 00:27:50.628 } 00:27:50.628 ] 00:27:50.628 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.628 23:41:11 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:50.628 23:41:11 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:50.628 23:41:11 -- host/aer.sh@33 -- # aerpid=339413 00:27:50.628 23:41:11 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:50.628 23:41:11 -- common/autotest_common.sh@1244 -- # local i=0 00:27:50.628 23:41:11 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:50.628 23:41:11 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:27:50.629 23:41:11 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:50.629 23:41:11 -- common/autotest_common.sh@1247 -- # i=1 00:27:50.629 23:41:11 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:50.629 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.629 23:41:11 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:50.629 23:41:11 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:27:50.629 23:41:11 -- common/autotest_common.sh@1247 -- # i=2 00:27:50.629 23:41:11 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:50.629 23:41:11 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:50.629 23:41:11 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:27:50.629 23:41:11 -- common/autotest_common.sh@1247 -- # i=3 00:27:50.629 23:41:11 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:50.887 23:41:11 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:50.887 23:41:11 -- common/autotest_common.sh@1246 -- # '[' 3 -lt 200 ']' 00:27:50.887 23:41:11 -- common/autotest_common.sh@1247 -- # i=4 00:27:50.887 23:41:11 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:50.887 23:41:11 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:50.887 23:41:11 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:50.887 23:41:11 -- common/autotest_common.sh@1255 -- # return 0 00:27:50.887 23:41:11 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:50.887 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.887 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.887 Malloc1 00:27:50.887 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.887 23:41:11 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:50.887 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.887 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.887 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.887 23:41:11 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:50.887 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.887 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:51.145 Asynchronous Event Request test 00:27:51.145 Attaching to 10.0.0.2 00:27:51.145 Attached to 10.0.0.2 00:27:51.145 Registering asynchronous event callbacks... 00:27:51.145 Starting namespace attribute notice tests for all controllers... 00:27:51.145 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:51.145 aer_cb - Changed Namespace 00:27:51.145 Cleaning up... 00:27:51.145 [ 00:27:51.145 { 00:27:51.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:51.145 "subtype": "Discovery", 00:27:51.145 "listen_addresses": [], 00:27:51.145 "allow_any_host": true, 00:27:51.145 "hosts": [] 00:27:51.145 }, 00:27:51.145 { 00:27:51.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.145 "subtype": "NVMe", 00:27:51.145 "listen_addresses": [ 00:27:51.145 { 00:27:51.146 "transport": "TCP", 00:27:51.146 "trtype": "TCP", 00:27:51.146 "adrfam": "IPv4", 00:27:51.146 "traddr": "10.0.0.2", 00:27:51.146 "trsvcid": "4420" 00:27:51.146 } 00:27:51.146 ], 00:27:51.146 "allow_any_host": true, 00:27:51.146 "hosts": [], 00:27:51.146 "serial_number": "SPDK00000000000001", 00:27:51.146 "model_number": "SPDK bdev Controller", 00:27:51.146 "max_namespaces": 2, 00:27:51.146 "min_cntlid": 1, 00:27:51.146 "max_cntlid": 65519, 00:27:51.146 "namespaces": [ 00:27:51.146 { 00:27:51.146 "nsid": 1, 00:27:51.146 "bdev_name": "Malloc0", 00:27:51.146 "name": "Malloc0", 00:27:51.146 "nguid": "013AB5B3EDB648398DCFCD70C3DEDEE3", 00:27:51.146 "uuid": "013ab5b3-edb6-4839-8dcf-cd70c3dedee3" 00:27:51.146 }, 00:27:51.146 { 00:27:51.146 "nsid": 2, 00:27:51.146 "bdev_name": "Malloc1", 00:27:51.146 "name": "Malloc1", 00:27:51.146 "nguid": "C14719DBEFEA4E09AE9A326BC16EAE17", 00:27:51.146 "uuid": "c14719db-efea-4e09-ae9a-326bc16eae17" 00:27:51.146 } 00:27:51.146 ] 00:27:51.146 } 00:27:51.146 ] 00:27:51.146 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.146 23:41:11 -- host/aer.sh@43 -- # wait 339413 00:27:51.146 23:41:11 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:51.146 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.146 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:51.146 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.146 23:41:11 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:51.146 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.146 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:51.146 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.146 23:41:11 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.146 23:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.146 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:51.146 23:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.146 23:41:11 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:51.146 23:41:11 -- host/aer.sh@51 -- # nvmftestfini 00:27:51.146 23:41:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:51.146 23:41:11 -- nvmf/common.sh@116 -- # sync 00:27:51.146 23:41:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:51.146 23:41:11 -- nvmf/common.sh@119 -- # set +e 00:27:51.146 23:41:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:51.146 23:41:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:51.146 rmmod nvme_tcp 00:27:51.146 rmmod nvme_fabrics 00:27:51.146 rmmod nvme_keyring 00:27:51.146 23:41:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:51.146 23:41:11 -- nvmf/common.sh@123 -- # set -e 00:27:51.146 23:41:11 -- nvmf/common.sh@124 -- # return 0 00:27:51.146 23:41:11 -- nvmf/common.sh@477 -- # '[' -n 339257 ']' 00:27:51.146 23:41:11 -- nvmf/common.sh@478 -- # killprocess 339257 00:27:51.146 23:41:11 -- common/autotest_common.sh@926 -- # '[' -z 339257 ']' 00:27:51.146 23:41:11 -- common/autotest_common.sh@930 -- # kill -0 339257 00:27:51.146 23:41:11 -- common/autotest_common.sh@931 -- # uname 00:27:51.146 23:41:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:51.146 23:41:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 339257 00:27:51.146 23:41:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:51.146 23:41:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:51.146 23:41:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 339257' 00:27:51.146 killing process with pid 339257 00:27:51.146 23:41:12 -- common/autotest_common.sh@945 -- # kill 339257 00:27:51.146 [2024-07-11 23:41:12.022087] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:51.146 23:41:12 -- common/autotest_common.sh@950 -- # wait 339257 00:27:51.405 23:41:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:51.405 23:41:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:51.405 23:41:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:51.405 23:41:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.405 23:41:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:51.405 23:41:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.405 23:41:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.405 23:41:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.993 23:41:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:53.993 00:27:53.993 real 0m7.049s 00:27:53.993 user 0m8.370s 00:27:53.993 sys 0m2.685s 00:27:53.993 23:41:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.993 23:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 ************************************ 00:27:53.993 END TEST nvmf_aer 00:27:53.993 ************************************ 00:27:53.993 23:41:14 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:53.993 23:41:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 
1 ']' 00:27:53.993 23:41:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:53.993 23:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 ************************************ 00:27:53.993 START TEST nvmf_async_init 00:27:53.993 ************************************ 00:27:53.993 23:41:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:53.993 * Looking for test storage... 00:27:53.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.993 23:41:14 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.993 23:41:14 -- nvmf/common.sh@7 -- # uname -s 00:27:53.993 23:41:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.993 23:41:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.993 23:41:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.993 23:41:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.993 23:41:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.993 23:41:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.993 23:41:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.993 23:41:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.993 23:41:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.993 23:41:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.993 23:41:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:53.993 23:41:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:53.993 23:41:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.993 23:41:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.993 23:41:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.994 23:41:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.994 23:41:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.994 23:41:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.994 23:41:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.994 23:41:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.994 23:41:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.994 23:41:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.994 23:41:14 -- paths/export.sh@5 -- # export PATH 00:27:53.994 23:41:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.994 23:41:14 -- nvmf/common.sh@46 -- # : 0 00:27:53.994 23:41:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:53.994 23:41:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:53.994 23:41:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:53.994 23:41:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.994 23:41:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.994 23:41:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:53.994 23:41:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:53.994 23:41:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:53.994 23:41:14 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:53.994 23:41:14 -- host/async_init.sh@14 -- # null_block_size=512 00:27:53.994 23:41:14 -- host/async_init.sh@15 -- # null_bdev=null0 00:27:53.994 23:41:14 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:53.994 23:41:14 -- host/async_init.sh@20 -- # uuidgen 00:27:53.994 23:41:14 -- host/async_init.sh@20 -- # tr -d - 00:27:53.994 23:41:14 -- host/async_init.sh@20 -- # nguid=5243ff15f98040668312d98b424f18dc 00:27:53.994 23:41:14 -- host/async_init.sh@22 -- # nvmftestinit 00:27:53.994 23:41:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:53.994 23:41:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.994 23:41:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:53.994 23:41:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:53.994 23:41:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:53.994 23:41:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.994 23:41:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.994 23:41:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.994 23:41:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:53.994 23:41:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:53.994 23:41:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:53.994 23:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:56.530 23:41:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:56.530 23:41:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:56.530 23:41:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:56.530 23:41:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:56.530 23:41:16 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:56.530 23:41:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:56.530 23:41:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:56.530 23:41:16 -- nvmf/common.sh@294 -- # net_devs=() 00:27:56.530 23:41:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:56.530 23:41:16 -- nvmf/common.sh@295 -- # e810=() 00:27:56.530 23:41:16 -- nvmf/common.sh@295 -- # local -ga e810 00:27:56.530 23:41:16 -- nvmf/common.sh@296 -- # x722=() 00:27:56.530 23:41:16 -- nvmf/common.sh@296 -- # local -ga x722 00:27:56.530 23:41:16 -- nvmf/common.sh@297 -- # mlx=() 00:27:56.530 23:41:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:56.530 23:41:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.530 23:41:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:56.530 23:41:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:56.530 23:41:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:56.530 23:41:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.530 23:41:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:56.530 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:56.530 23:41:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:56.530 23:41:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.530 23:41:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:56.530 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:56.530 23:41:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:56.530 23:41:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.530 
23:41:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.530 23:41:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.530 23:41:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.530 23:41:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:56.530 Found net devices under 0000:84:00.0: cvl_0_0 00:27:56.530 23:41:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.530 23:41:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.530 23:41:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.530 23:41:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.530 23:41:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.530 23:41:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:56.530 Found net devices under 0000:84:00.1: cvl_0_1 00:27:56.530 23:41:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.530 23:41:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:56.530 23:41:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:56.530 23:41:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:56.530 23:41:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:56.530 23:41:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.530 23:41:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.530 23:41:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.530 23:41:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:56.530 23:41:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.530 23:41:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.530 23:41:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:56.530 23:41:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.530 23:41:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.530 23:41:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:56.530 23:41:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:56.530 23:41:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.530 23:41:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.530 23:41:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.530 23:41:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.530 23:41:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:56.530 23:41:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.530 23:41:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.530 23:41:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.530 23:41:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:56.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:27:56.530 00:27:56.530 --- 10.0.0.2 ping statistics --- 00:27:56.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.530 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:27:56.530 23:41:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:27:56.531 00:27:56.531 --- 10.0.0.1 ping statistics --- 00:27:56.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.531 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:27:56.531 23:41:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.531 23:41:17 -- nvmf/common.sh@410 -- # return 0 00:27:56.531 23:41:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:56.531 23:41:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.531 23:41:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:56.531 23:41:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:56.531 23:41:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.531 23:41:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:56.531 23:41:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:56.531 23:41:17 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:56.531 23:41:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:56.531 23:41:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:56.531 23:41:17 -- common/autotest_common.sh@10 -- # set +x 00:27:56.531 23:41:17 -- nvmf/common.sh@469 -- # nvmfpid=341510 00:27:56.531 23:41:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:56.531 23:41:17 -- nvmf/common.sh@470 -- # waitforlisten 341510 00:27:56.531 23:41:17 -- common/autotest_common.sh@819 -- # '[' -z 341510 ']' 00:27:56.531 23:41:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.531 23:41:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:56.531 23:41:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.531 23:41:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:56.531 23:41:17 -- common/autotest_common.sh@10 -- # set +x 00:27:56.531 [2024-07-11 23:41:17.278263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:56.531 [2024-07-11 23:41:17.278349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.531 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.531 [2024-07-11 23:41:17.386390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.531 [2024-07-11 23:41:17.479369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:56.531 [2024-07-11 23:41:17.479520] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.531 [2024-07-11 23:41:17.479540] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.531 [2024-07-11 23:41:17.479554] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
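The nvmf_tcp_init block above builds the whole test topology out of the host's two E810 ports (0x8086:0x159b, ice driver) and a single network namespace, so target and initiator traffic crosses a real link rather than loopback. A condensed sketch of those steps, using the interface names from this run (cvl_0_0/cvl_0_1 are specific to this machine):

ip netns add cvl_0_0_ns_spdk                          # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Both pings answering with 0% loss is the gate for continuing; the nvmf_tgt application is then launched inside cvl_0_0_ns_spdk while the kernel initiator side stays in the default namespace.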
00:27:56.531 [2024-07-11 23:41:17.479589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.466 23:41:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:57.466 23:41:18 -- common/autotest_common.sh@852 -- # return 0 00:27:57.466 23:41:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:57.466 23:41:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:57.466 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.466 23:41:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.466 23:41:18 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:57.466 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.466 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.466 [2024-07-11 23:41:18.385490] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.466 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.466 23:41:18 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:57.466 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.466 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.466 null0 00:27:57.466 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.466 23:41:18 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:57.466 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.466 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.466 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.466 23:41:18 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:57.466 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.466 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.466 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.466 23:41:18 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5243ff15f98040668312d98b424f18dc 00:27:57.466 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.466 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.724 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.724 23:41:18 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:57.724 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.724 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.724 [2024-07-11 23:41:18.425745] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.724 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.724 23:41:18 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:57.724 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.724 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.724 nvme0n1 00:27:57.724 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.724 23:41:18 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:57.724 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.724 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.724 [ 00:27:57.724 { 00:27:57.724 "name": "nvme0n1", 00:27:57.724 "aliases": [ 00:27:57.724 
"5243ff15-f980-4066-8312-d98b424f18dc" 00:27:57.724 ], 00:27:57.724 "product_name": "NVMe disk", 00:27:57.724 "block_size": 512, 00:27:57.724 "num_blocks": 2097152, 00:27:57.724 "uuid": "5243ff15-f980-4066-8312-d98b424f18dc", 00:27:57.724 "assigned_rate_limits": { 00:27:57.724 "rw_ios_per_sec": 0, 00:27:57.724 "rw_mbytes_per_sec": 0, 00:27:57.724 "r_mbytes_per_sec": 0, 00:27:57.724 "w_mbytes_per_sec": 0 00:27:57.724 }, 00:27:57.724 "claimed": false, 00:27:57.724 "zoned": false, 00:27:57.724 "supported_io_types": { 00:27:57.724 "read": true, 00:27:57.724 "write": true, 00:27:57.724 "unmap": false, 00:27:57.724 "write_zeroes": true, 00:27:57.724 "flush": true, 00:27:57.724 "reset": true, 00:27:57.724 "compare": true, 00:27:57.724 "compare_and_write": true, 00:27:57.724 "abort": true, 00:27:57.724 "nvme_admin": true, 00:27:57.724 "nvme_io": true 00:27:57.724 }, 00:27:57.724 "driver_specific": { 00:27:57.724 "nvme": [ 00:27:57.724 { 00:27:57.724 "trid": { 00:27:57.724 "trtype": "TCP", 00:27:57.724 "adrfam": "IPv4", 00:27:57.724 "traddr": "10.0.0.2", 00:27:57.724 "trsvcid": "4420", 00:27:57.724 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:57.724 }, 00:27:57.724 "ctrlr_data": { 00:27:57.724 "cntlid": 1, 00:27:57.724 "vendor_id": "0x8086", 00:27:57.724 "model_number": "SPDK bdev Controller", 00:27:57.724 "serial_number": "00000000000000000000", 00:27:57.724 "firmware_revision": "24.01.1", 00:27:57.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.724 "oacs": { 00:27:57.724 "security": 0, 00:27:57.724 "format": 0, 00:27:57.724 "firmware": 0, 00:27:57.724 "ns_manage": 0 00:27:57.724 }, 00:27:57.724 "multi_ctrlr": true, 00:27:57.724 "ana_reporting": false 00:27:57.724 }, 00:27:57.724 "vs": { 00:27:57.724 "nvme_version": "1.3" 00:27:57.724 }, 00:27:57.724 "ns_data": { 00:27:57.724 "id": 1, 00:27:57.724 "can_share": true 00:27:57.724 } 00:27:57.724 } 00:27:57.724 ], 00:27:57.724 "mp_policy": "active_passive" 00:27:57.724 } 00:27:57.724 } 00:27:57.724 ] 00:27:57.724 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.724 23:41:18 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:57.724 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.724 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.724 [2024-07-11 23:41:18.674362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:57.724 [2024-07-11 23:41:18.674449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e96e0 (9): Bad file descriptor 00:27:57.982 [2024-07-11 23:41:18.806290] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:57.982 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.982 23:41:18 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:57.982 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.982 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.982 [ 00:27:57.982 { 00:27:57.982 "name": "nvme0n1", 00:27:57.982 "aliases": [ 00:27:57.982 "5243ff15-f980-4066-8312-d98b424f18dc" 00:27:57.982 ], 00:27:57.982 "product_name": "NVMe disk", 00:27:57.982 "block_size": 512, 00:27:57.982 "num_blocks": 2097152, 00:27:57.982 "uuid": "5243ff15-f980-4066-8312-d98b424f18dc", 00:27:57.982 "assigned_rate_limits": { 00:27:57.982 "rw_ios_per_sec": 0, 00:27:57.982 "rw_mbytes_per_sec": 0, 00:27:57.982 "r_mbytes_per_sec": 0, 00:27:57.982 "w_mbytes_per_sec": 0 00:27:57.982 }, 00:27:57.982 "claimed": false, 00:27:57.982 "zoned": false, 00:27:57.982 "supported_io_types": { 00:27:57.982 "read": true, 00:27:57.982 "write": true, 00:27:57.982 "unmap": false, 00:27:57.982 "write_zeroes": true, 00:27:57.982 "flush": true, 00:27:57.982 "reset": true, 00:27:57.982 "compare": true, 00:27:57.982 "compare_and_write": true, 00:27:57.982 "abort": true, 00:27:57.982 "nvme_admin": true, 00:27:57.982 "nvme_io": true 00:27:57.982 }, 00:27:57.982 "driver_specific": { 00:27:57.982 "nvme": [ 00:27:57.982 { 00:27:57.982 "trid": { 00:27:57.982 "trtype": "TCP", 00:27:57.982 "adrfam": "IPv4", 00:27:57.982 "traddr": "10.0.0.2", 00:27:57.982 "trsvcid": "4420", 00:27:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:57.982 }, 00:27:57.982 "ctrlr_data": { 00:27:57.982 "cntlid": 2, 00:27:57.982 "vendor_id": "0x8086", 00:27:57.982 "model_number": "SPDK bdev Controller", 00:27:57.982 "serial_number": "00000000000000000000", 00:27:57.982 "firmware_revision": "24.01.1", 00:27:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.982 "oacs": { 00:27:57.982 "security": 0, 00:27:57.982 "format": 0, 00:27:57.982 "firmware": 0, 00:27:57.982 "ns_manage": 0 00:27:57.982 }, 00:27:57.982 "multi_ctrlr": true, 00:27:57.982 "ana_reporting": false 00:27:57.982 }, 00:27:57.982 "vs": { 00:27:57.982 "nvme_version": "1.3" 00:27:57.982 }, 00:27:57.982 "ns_data": { 00:27:57.982 "id": 1, 00:27:57.982 "can_share": true 00:27:57.982 } 00:27:57.982 } 00:27:57.982 ], 00:27:57.982 "mp_policy": "active_passive" 00:27:57.982 } 00:27:57.982 } 00:27:57.982 ] 00:27:57.982 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.982 23:41:18 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.982 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.982 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.982 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.982 23:41:18 -- host/async_init.sh@53 -- # mktemp 00:27:57.982 23:41:18 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KReOkOTrc1 00:27:57.983 23:41:18 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:57.983 23:41:18 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KReOkOTrc1 00:27:57.983 23:41:18 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:57.983 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.983 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.983 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.983 23:41:18 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:57.983 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.983 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.983 [2024-07-11 23:41:18.850943] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:57.983 [2024-07-11 23:41:18.851076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:57.983 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.983 23:41:18 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KReOkOTrc1 00:27:57.983 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.983 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.983 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.983 23:41:18 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KReOkOTrc1 00:27:57.983 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.983 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:57.983 [2024-07-11 23:41:18.866979] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:58.241 nvme0n1 00:27:58.241 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.241 23:41:18 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:58.241 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.241 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:58.241 [ 00:27:58.241 { 00:27:58.241 "name": "nvme0n1", 00:27:58.241 "aliases": [ 00:27:58.241 "5243ff15-f980-4066-8312-d98b424f18dc" 00:27:58.241 ], 00:27:58.241 "product_name": "NVMe disk", 00:27:58.241 "block_size": 512, 00:27:58.241 "num_blocks": 2097152, 00:27:58.241 "uuid": "5243ff15-f980-4066-8312-d98b424f18dc", 00:27:58.241 "assigned_rate_limits": { 00:27:58.241 "rw_ios_per_sec": 0, 00:27:58.241 "rw_mbytes_per_sec": 0, 00:27:58.241 "r_mbytes_per_sec": 0, 00:27:58.241 "w_mbytes_per_sec": 0 00:27:58.241 }, 00:27:58.241 "claimed": false, 00:27:58.241 "zoned": false, 00:27:58.241 "supported_io_types": { 00:27:58.241 "read": true, 00:27:58.241 "write": true, 00:27:58.241 "unmap": false, 00:27:58.241 "write_zeroes": true, 00:27:58.241 "flush": true, 00:27:58.241 "reset": true, 00:27:58.241 "compare": true, 00:27:58.241 "compare_and_write": true, 00:27:58.241 "abort": true, 00:27:58.241 "nvme_admin": true, 00:27:58.241 "nvme_io": true 00:27:58.241 }, 00:27:58.241 "driver_specific": { 00:27:58.241 "nvme": [ 00:27:58.241 { 00:27:58.241 "trid": { 00:27:58.241 "trtype": "TCP", 00:27:58.241 "adrfam": "IPv4", 00:27:58.241 "traddr": "10.0.0.2", 00:27:58.241 "trsvcid": "4421", 00:27:58.241 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:58.241 }, 00:27:58.241 "ctrlr_data": { 00:27:58.241 "cntlid": 3, 00:27:58.241 "vendor_id": "0x8086", 00:27:58.241 "model_number": "SPDK bdev Controller", 00:27:58.241 "serial_number": "00000000000000000000", 00:27:58.241 "firmware_revision": "24.01.1", 00:27:58.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.241 "oacs": { 00:27:58.241 "security": 0, 00:27:58.241 "format": 0, 00:27:58.241 "firmware": 0, 00:27:58.241 "ns_manage": 0 00:27:58.241 }, 00:27:58.241 "multi_ctrlr": true, 00:27:58.241 "ana_reporting": false 00:27:58.241 }, 00:27:58.241 "vs": 
{ 00:27:58.241 "nvme_version": "1.3" 00:27:58.241 }, 00:27:58.242 "ns_data": { 00:27:58.242 "id": 1, 00:27:58.242 "can_share": true 00:27:58.242 } 00:27:58.242 } 00:27:58.242 ], 00:27:58.242 "mp_policy": "active_passive" 00:27:58.242 } 00:27:58.242 } 00:27:58.242 ] 00:27:58.242 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.242 23:41:18 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.242 23:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.242 23:41:18 -- common/autotest_common.sh@10 -- # set +x 00:27:58.242 23:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.242 23:41:18 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.KReOkOTrc1 00:27:58.242 23:41:18 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:58.242 23:41:18 -- host/async_init.sh@78 -- # nvmftestfini 00:27:58.242 23:41:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:58.242 23:41:18 -- nvmf/common.sh@116 -- # sync 00:27:58.242 23:41:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:58.242 23:41:18 -- nvmf/common.sh@119 -- # set +e 00:27:58.242 23:41:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:58.242 23:41:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:58.242 rmmod nvme_tcp 00:27:58.242 rmmod nvme_fabrics 00:27:58.242 rmmod nvme_keyring 00:27:58.242 23:41:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:58.242 23:41:19 -- nvmf/common.sh@123 -- # set -e 00:27:58.242 23:41:19 -- nvmf/common.sh@124 -- # return 0 00:27:58.242 23:41:19 -- nvmf/common.sh@477 -- # '[' -n 341510 ']' 00:27:58.242 23:41:19 -- nvmf/common.sh@478 -- # killprocess 341510 00:27:58.242 23:41:19 -- common/autotest_common.sh@926 -- # '[' -z 341510 ']' 00:27:58.242 23:41:19 -- common/autotest_common.sh@930 -- # kill -0 341510 00:27:58.242 23:41:19 -- common/autotest_common.sh@931 -- # uname 00:27:58.242 23:41:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:58.242 23:41:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 341510 00:27:58.242 23:41:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:58.242 23:41:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:58.242 23:41:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 341510' 00:27:58.242 killing process with pid 341510 00:27:58.242 23:41:19 -- common/autotest_common.sh@945 -- # kill 341510 00:27:58.242 23:41:19 -- common/autotest_common.sh@950 -- # wait 341510 00:27:58.502 23:41:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:58.502 23:41:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:58.502 23:41:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:58.502 23:41:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.502 23:41:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:58.502 23:41:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.502 23:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.502 23:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.411 23:41:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:00.411 00:28:00.411 real 0m6.954s 00:28:00.411 user 0m3.252s 00:28:00.411 sys 0m2.486s 00:28:00.411 23:41:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.411 23:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:00.411 ************************************ 00:28:00.411 END TEST nvmf_async_init 00:28:00.411 
************************************ 00:28:00.411 23:41:21 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:00.411 23:41:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:00.411 23:41:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:00.411 23:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:00.411 ************************************ 00:28:00.411 START TEST dma 00:28:00.411 ************************************ 00:28:00.411 23:41:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:00.691 * Looking for test storage... 00:28:00.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.691 23:41:21 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.691 23:41:21 -- nvmf/common.sh@7 -- # uname -s 00:28:00.691 23:41:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.691 23:41:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.691 23:41:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.691 23:41:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.691 23:41:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.691 23:41:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.691 23:41:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.691 23:41:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.691 23:41:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.691 23:41:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.691 23:41:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:00.691 23:41:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:00.691 23:41:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.691 23:41:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.691 23:41:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.691 23:41:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.691 23:41:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.691 23:41:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.691 23:41:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.691 23:41:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.691 23:41:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.691 23:41:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.691 23:41:21 -- paths/export.sh@5 -- # export PATH 00:28:00.691 23:41:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.691 23:41:21 -- nvmf/common.sh@46 -- # : 0 00:28:00.691 23:41:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:00.691 23:41:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:00.691 23:41:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:00.691 23:41:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.691 23:41:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.691 23:41:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:00.691 23:41:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:00.691 23:41:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:00.691 23:41:21 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:00.691 23:41:21 -- host/dma.sh@13 -- # exit 0 00:28:00.691 00:28:00.691 real 0m0.082s 00:28:00.691 user 0m0.033s 00:28:00.691 sys 0m0.055s 00:28:00.691 23:41:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.691 23:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:00.691 ************************************ 00:28:00.691 END TEST dma 00:28:00.691 ************************************ 00:28:00.691 23:41:21 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:00.691 23:41:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:00.691 23:41:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:00.691 23:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:00.691 ************************************ 00:28:00.691 START TEST nvmf_identify 00:28:00.691 ************************************ 00:28:00.692 23:41:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:00.692 * Looking for 
test storage... 00:28:00.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.692 23:41:21 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.692 23:41:21 -- nvmf/common.sh@7 -- # uname -s 00:28:00.692 23:41:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.692 23:41:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.692 23:41:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.692 23:41:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.692 23:41:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.692 23:41:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.692 23:41:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.692 23:41:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.692 23:41:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.692 23:41:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.692 23:41:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:00.692 23:41:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:00.692 23:41:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.692 23:41:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.692 23:41:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.692 23:41:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.692 23:41:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.692 23:41:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.692 23:41:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.692 23:41:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.692 23:41:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.692 23:41:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.692 23:41:21 -- paths/export.sh@5 -- # export PATH 00:28:00.692 23:41:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.692 23:41:21 -- nvmf/common.sh@46 -- # : 0 00:28:00.692 23:41:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:00.692 23:41:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:00.692 23:41:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:00.692 23:41:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.692 23:41:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.692 23:41:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:00.692 23:41:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:00.692 23:41:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:00.692 23:41:21 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:00.692 23:41:21 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:00.692 23:41:21 -- host/identify.sh@14 -- # nvmftestinit 00:28:00.692 23:41:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:00.692 23:41:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.692 23:41:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:00.692 23:41:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:00.692 23:41:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:00.692 23:41:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.692 23:41:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.692 23:41:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.692 23:41:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:00.692 23:41:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:00.692 23:41:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:00.692 23:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:03.230 23:41:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:03.230 23:41:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:03.230 23:41:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:03.230 23:41:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:03.230 23:41:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:03.230 23:41:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:03.230 23:41:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:03.230 23:41:24 -- nvmf/common.sh@294 -- # net_devs=() 00:28:03.230 23:41:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:03.230 23:41:24 -- nvmf/common.sh@295 
-- # e810=() 00:28:03.230 23:41:24 -- nvmf/common.sh@295 -- # local -ga e810 00:28:03.230 23:41:24 -- nvmf/common.sh@296 -- # x722=() 00:28:03.230 23:41:24 -- nvmf/common.sh@296 -- # local -ga x722 00:28:03.230 23:41:24 -- nvmf/common.sh@297 -- # mlx=() 00:28:03.230 23:41:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:03.230 23:41:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.230 23:41:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:03.230 23:41:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:03.230 23:41:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:03.230 23:41:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.230 23:41:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:03.230 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:03.230 23:41:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.230 23:41:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:03.230 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:03.230 23:41:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:03.230 23:41:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.230 23:41:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.230 23:41:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.230 23:41:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.230 23:41:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:03.230 Found 
net devices under 0000:84:00.0: cvl_0_0 00:28:03.230 23:41:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.230 23:41:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.230 23:41:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.230 23:41:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.230 23:41:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.230 23:41:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:03.230 Found net devices under 0000:84:00.1: cvl_0_1 00:28:03.230 23:41:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.230 23:41:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:03.230 23:41:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:03.230 23:41:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:03.230 23:41:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:03.230 23:41:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.230 23:41:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.230 23:41:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.230 23:41:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:03.230 23:41:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.230 23:41:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.230 23:41:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:03.230 23:41:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.230 23:41:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.230 23:41:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:03.230 23:41:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:03.230 23:41:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.230 23:41:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.230 23:41:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.230 23:41:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.230 23:41:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:03.230 23:41:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.490 23:41:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.490 23:41:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.490 23:41:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:03.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:28:03.490 00:28:03.490 --- 10.0.0.2 ping statistics --- 00:28:03.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.490 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:03.490 23:41:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:28:03.490 00:28:03.490 --- 10.0.0.1 ping statistics --- 00:28:03.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.490 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:03.490 23:41:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.490 23:41:24 -- nvmf/common.sh@410 -- # return 0 00:28:03.490 23:41:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:03.490 23:41:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.490 23:41:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:03.490 23:41:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:03.490 23:41:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.490 23:41:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:03.490 23:41:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:03.490 23:41:24 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:03.490 23:41:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:03.490 23:41:24 -- common/autotest_common.sh@10 -- # set +x 00:28:03.490 23:41:24 -- host/identify.sh@19 -- # nvmfpid=343797 00:28:03.490 23:41:24 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:03.490 23:41:24 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:03.490 23:41:24 -- host/identify.sh@23 -- # waitforlisten 343797 00:28:03.490 23:41:24 -- common/autotest_common.sh@819 -- # '[' -z 343797 ']' 00:28:03.490 23:41:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.490 23:41:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:03.490 23:41:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.490 23:41:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:03.490 23:41:24 -- common/autotest_common.sh@10 -- # set +x 00:28:03.490 [2024-07-11 23:41:24.312311] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:03.490 [2024-07-11 23:41:24.312422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.490 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.490 [2024-07-11 23:41:24.403067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:03.750 [2024-07-11 23:41:24.501837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:03.750 [2024-07-11 23:41:24.502008] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.750 [2024-07-11 23:41:24.502028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.750 [2024-07-11 23:41:24.502042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
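The identify host test rebuilds the same namespace topology and then starts the target with a wider core mask, 0xF instead of the 0x1 used for async_init, which is why four reactors come up below. Roughly what nvmfappstart does here, with waitforlisten being the harness helper that polls until the RPC socket is up:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                          # 343797 in this run
waitforlisten "$nvmfpid"            # blocks until /var/tmp/spdk.sock accepts RPCs

-e 0xFFFF enables every tracepoint group (hence the Tracepoint Group Mask notice), and -i 0 fixes the shared-memory ID so companion tooling such as spdk_trace can locate /dev/shm/nvmf_trace.0 afterwards.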
00:28:03.750 [2024-07-11 23:41:24.502124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.750 [2024-07-11 23:41:24.502183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.750 [2024-07-11 23:41:24.502210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.750 [2024-07-11 23:41:24.502214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.134 23:41:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:05.134 23:41:25 -- common/autotest_common.sh@852 -- # return 0 00:28:05.134 23:41:25 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.134 23:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 [2024-07-11 23:41:25.686912] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.134 23:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.134 23:41:25 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:05.134 23:41:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 23:41:25 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:05.134 23:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 Malloc0 00:28:05.134 23:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.134 23:41:25 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.134 23:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 23:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.134 23:41:25 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:05.134 23:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 23:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.134 23:41:25 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.134 23:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 [2024-07-11 23:41:25.768890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.134 23:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.134 23:41:25 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:05.134 23:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 23:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.134 23:41:25 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:05.134 23:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.134 23:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.134 [2024-07-11 23:41:25.784597] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:05.134 [ 
00:28:05.134 { 00:28:05.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:05.134 "subtype": "Discovery", 00:28:05.134 "listen_addresses": [ 00:28:05.134 { 00:28:05.134 "transport": "TCP", 00:28:05.134 "trtype": "TCP", 00:28:05.134 "adrfam": "IPv4", 00:28:05.134 "traddr": "10.0.0.2", 00:28:05.134 "trsvcid": "4420" 00:28:05.134 } 00:28:05.134 ], 00:28:05.134 "allow_any_host": true, 00:28:05.134 "hosts": [] 00:28:05.134 }, 00:28:05.134 { 00:28:05.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.134 "subtype": "NVMe", 00:28:05.134 "listen_addresses": [ 00:28:05.134 { 00:28:05.134 "transport": "TCP", 00:28:05.134 "trtype": "TCP", 00:28:05.134 "adrfam": "IPv4", 00:28:05.134 "traddr": "10.0.0.2", 00:28:05.134 "trsvcid": "4420" 00:28:05.134 } 00:28:05.134 ], 00:28:05.134 "allow_any_host": true, 00:28:05.134 "hosts": [], 00:28:05.134 "serial_number": "SPDK00000000000001", 00:28:05.134 "model_number": "SPDK bdev Controller", 00:28:05.134 "max_namespaces": 32, 00:28:05.134 "min_cntlid": 1, 00:28:05.134 "max_cntlid": 65519, 00:28:05.134 "namespaces": [ 00:28:05.134 { 00:28:05.134 "nsid": 1, 00:28:05.134 "bdev_name": "Malloc0", 00:28:05.134 "name": "Malloc0", 00:28:05.134 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:05.134 "eui64": "ABCDEF0123456789", 00:28:05.134 "uuid": "1fdd8f7f-704a-4a81-95f3-2991f9c309c7" 00:28:05.134 } 00:28:05.134 ] 00:28:05.134 } 00:28:05.134 ] 00:28:05.134 23:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.134 23:41:25 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:05.134 [2024-07-11 23:41:25.810406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
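From here the test stops driving the target over RPC and runs the standalone identify tool against the discovery service; everything that follows is its -L all debug stream walking the controller initialization state machine (FABRIC CONNECT on the admin queue, PROPERTY GET of VS and CAP, then the CC.EN/CSTS.RDY enable sequence). The invocation, reflowed for readability (the -r argument is a single transport-ID string, leading space preserved from the log):

./build/bin/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all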
00:28:05.134 [2024-07-11 23:41:25.810454] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343955 ] 00:28:05.134 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.134 [2024-07-11 23:41:25.845599] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:05.134 [2024-07-11 23:41:25.845663] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:05.134 [2024-07-11 23:41:25.845673] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:05.134 [2024-07-11 23:41:25.845690] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:05.134 [2024-07-11 23:41:25.845704] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:05.134 [2024-07-11 23:41:25.849229] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:05.134 [2024-07-11 23:41:25.849293] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15d45a0 0 00:28:05.134 [2024-07-11 23:41:25.856161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:05.134 [2024-07-11 23:41:25.856184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:05.134 [2024-07-11 23:41:25.856194] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:05.134 [2024-07-11 23:41:25.856200] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:05.134 [2024-07-11 23:41:25.856253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.856275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.856283] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.134 [2024-07-11 23:41:25.856301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:05.134 [2024-07-11 23:41:25.856328] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.134 [2024-07-11 23:41:25.864153] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.134 [2024-07-11 23:41:25.864171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.134 [2024-07-11 23:41:25.864204] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.864212] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.134 [2024-07-11 23:41:25.864231] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:05.134 [2024-07-11 23:41:25.864241] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:05.134 [2024-07-11 23:41:25.864252] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:05.134 [2024-07-11 23:41:25.864272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.864281] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.864287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.134 [2024-07-11 23:41:25.864298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.134 [2024-07-11 23:41:25.864322] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.134 [2024-07-11 23:41:25.864522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.134 [2024-07-11 23:41:25.864537] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.134 [2024-07-11 23:41:25.864543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.864550] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.134 [2024-07-11 23:41:25.864560] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:05.134 [2024-07-11 23:41:25.864574] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:05.134 [2024-07-11 23:41:25.864586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.864593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.864599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.134 [2024-07-11 23:41:25.864610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.134 [2024-07-11 23:41:25.864630] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.134 [2024-07-11 23:41:25.864789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.134 [2024-07-11 23:41:25.864803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.134 [2024-07-11 23:41:25.864809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.134 [2024-07-11 23:41:25.864815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.135 [2024-07-11 23:41:25.864825] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:05.135 [2024-07-11 23:41:25.864840] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:05.135 [2024-07-11 23:41:25.864851] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.864858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.864864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.864874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.135 [2024-07-11 23:41:25.864895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.135 [2024-07-11 23:41:25.865067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.135 [2024-07-11 
23:41:25.865079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.135 [2024-07-11 23:41:25.865085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.135 [2024-07-11 23:41:25.865101] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:05.135 [2024-07-11 23:41:25.865117] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.865150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.135 [2024-07-11 23:41:25.865188] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.135 [2024-07-11 23:41:25.865322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.135 [2024-07-11 23:41:25.865336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.135 [2024-07-11 23:41:25.865343] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.135 [2024-07-11 23:41:25.865359] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:05.135 [2024-07-11 23:41:25.865368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:05.135 [2024-07-11 23:41:25.865381] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:05.135 [2024-07-11 23:41:25.865491] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:05.135 [2024-07-11 23:41:25.865500] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:05.135 [2024-07-11 23:41:25.865516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.865539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.135 [2024-07-11 23:41:25.865560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.135 [2024-07-11 23:41:25.865725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.135 [2024-07-11 23:41:25.865737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.135 [2024-07-11 23:41:25.865743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865749] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.135 [2024-07-11 23:41:25.865759] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:05.135 [2024-07-11 23:41:25.865774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865782] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.865788] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.865798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.135 [2024-07-11 23:41:25.865822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.135 [2024-07-11 23:41:25.865986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.135 [2024-07-11 23:41:25.866000] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.135 [2024-07-11 23:41:25.866006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866012] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.135 [2024-07-11 23:41:25.866021] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:05.135 [2024-07-11 23:41:25.866029] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:05.135 [2024-07-11 23:41:25.866042] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:05.135 [2024-07-11 23:41:25.866056] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:05.135 [2024-07-11 23:41:25.866070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866078] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.866094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.135 [2024-07-11 23:41:25.866114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.135 [2024-07-11 23:41:25.866313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.135 [2024-07-11 23:41:25.866328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.135 [2024-07-11 23:41:25.866335] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866341] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15d45a0): datao=0, datal=4096, cccid=0 00:28:05.135 [2024-07-11 23:41:25.866349] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163f3e0) on tqpair(0x15d45a0): 
expected_datao=0, payload_size=4096 00:28:05.135 [2024-07-11 23:41:25.866384] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866395] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.135 [2024-07-11 23:41:25.866535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.135 [2024-07-11 23:41:25.866541] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.135 [2024-07-11 23:41:25.866561] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:05.135 [2024-07-11 23:41:25.866570] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:05.135 [2024-07-11 23:41:25.866577] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:05.135 [2024-07-11 23:41:25.866586] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:05.135 [2024-07-11 23:41:25.866594] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:05.135 [2024-07-11 23:41:25.866602] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:05.135 [2024-07-11 23:41:25.866621] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:05.135 [2024-07-11 23:41:25.866637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866645] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.866661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:05.135 [2024-07-11 23:41:25.866682] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.135 [2024-07-11 23:41:25.866860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.135 [2024-07-11 23:41:25.866871] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.135 [2024-07-11 23:41:25.866877] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866883] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f3e0) on tqpair=0x15d45a0 00:28:05.135 [2024-07-11 23:41:25.866897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866910] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.866919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
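
The SET FEATURES ASYNC EVENT CONFIGURATION completion above arms asynchronous event reporting, and the driver then queues four ASYNC EVENT REQUEST commands (cid 0 here; the cid 1 to 3 submissions continue directly below). A sketch of how a host application would consume those events, again assuming SPDK's public API (the callback and function names are illustrative):

    /* Sketch: register a callback for the AER completions armed below. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            return;   /* e.g. ABORTED - SQ DELETION during detach */
        }
        /* cdw0 encodes the async event type/info per the NVMe spec. */
        printf("async event: cdw0=0x%x\n", cpl->cdw0);
    }

    void
    watch_events(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }

For a discovery controller the interesting event is the discovery log change notice, which the identify output further down reports as Supported.
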
00:28:05.135 [2024-07-11 23:41:25.866929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.866949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.135 [2024-07-11 23:41:25.866958] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866970] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.866978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.135 [2024-07-11 23:41:25.866987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.866999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.867007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.135 [2024-07-11 23:41:25.867015] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:05.135 [2024-07-11 23:41:25.867033] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:05.135 [2024-07-11 23:41:25.867045] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.867051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.135 [2024-07-11 23:41:25.867057] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15d45a0) 00:28:05.135 [2024-07-11 23:41:25.867066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.135 [2024-07-11 23:41:25.867088] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f3e0, cid 0, qid 0 00:28:05.135 [2024-07-11 23:41:25.867099] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f540, cid 1, qid 0 00:28:05.136 [2024-07-11 23:41:25.867110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f6a0, cid 2, qid 0 00:28:05.136 [2024-07-11 23:41:25.867118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f800, cid 3, qid 0 00:28:05.136 [2024-07-11 23:41:25.867147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f960, cid 4, qid 0 00:28:05.136 [2024-07-11 23:41:25.867343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.136 [2024-07-11 23:41:25.867358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.136 [2024-07-11 23:41:25.867364] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.867371] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f960) on tqpair=0x15d45a0 00:28:05.136 [2024-07-11 23:41:25.867381] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:05.136 [2024-07-11 23:41:25.867400] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:05.136 [2024-07-11 23:41:25.867432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.867441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.867447] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15d45a0) 00:28:05.136 [2024-07-11 23:41:25.867457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.136 [2024-07-11 23:41:25.867479] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f960, cid 4, qid 0 00:28:05.136 [2024-07-11 23:41:25.867668] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.136 [2024-07-11 23:41:25.867679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.136 [2024-07-11 23:41:25.867685] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.867691] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15d45a0): datao=0, datal=4096, cccid=4 00:28:05.136 [2024-07-11 23:41:25.867698] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163f960) on tqpair(0x15d45a0): expected_datao=0, payload_size=4096 00:28:05.136 [2024-07-11 23:41:25.867727] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.867735] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.136 [2024-07-11 23:41:25.912191] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.136 [2024-07-11 23:41:25.912199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f960) on tqpair=0x15d45a0 00:28:05.136 [2024-07-11 23:41:25.912227] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:05.136 [2024-07-11 23:41:25.912268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15d45a0) 00:28:05.136 [2024-07-11 23:41:25.912296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.136 [2024-07-11 23:41:25.912307] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912314] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15d45a0) 00:28:05.136 [2024-07-11 
23:41:25.912329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.136 [2024-07-11 23:41:25.912357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f960, cid 4, qid 0 00:28:05.136 [2024-07-11 23:41:25.912372] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163fac0, cid 5, qid 0 00:28:05.136 [2024-07-11 23:41:25.912608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.136 [2024-07-11 23:41:25.912623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.136 [2024-07-11 23:41:25.912630] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912636] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15d45a0): datao=0, datal=1024, cccid=4 00:28:05.136 [2024-07-11 23:41:25.912644] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163f960) on tqpair(0x15d45a0): expected_datao=0, payload_size=1024 00:28:05.136 [2024-07-11 23:41:25.912654] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912661] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.136 [2024-07-11 23:41:25.912678] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.136 [2024-07-11 23:41:25.912684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.912691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163fac0) on tqpair=0x15d45a0 00:28:05.136 [2024-07-11 23:41:25.953331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.136 [2024-07-11 23:41:25.953349] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.136 [2024-07-11 23:41:25.953356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f960) on tqpair=0x15d45a0 00:28:05.136 [2024-07-11 23:41:25.953381] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15d45a0) 00:28:05.136 [2024-07-11 23:41:25.953407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.136 [2024-07-11 23:41:25.953436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f960, cid 4, qid 0 00:28:05.136 [2024-07-11 23:41:25.953617] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.136 [2024-07-11 23:41:25.953629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.136 [2024-07-11 23:41:25.953636] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953642] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15d45a0): datao=0, datal=3072, cccid=4 00:28:05.136 [2024-07-11 23:41:25.953649] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163f960) on tqpair(0x15d45a0): expected_datao=0, payload_size=3072 
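
The GET LOG PAGE (02) commands around this point all target log page 0x70, the discovery log: a 1024-byte read of the page header (cdw10:00ff0070), a 3072-byte read of the entries (cdw10:02ff0070), and finally an 8-byte re-read of the generation counter (cdw10:00010070) to confirm the log did not change mid-read. A sketch of the header read under the same SPDK API assumptions (function names are illustrative); note that 1024 bytes matches sizeof(struct spdk_nvmf_discovery_log_page):

    /* Sketch: read the discovery log page header, then poll to completion. */
    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static void
    log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    int
    read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                          struct spdk_nvmf_discovery_log_page *hdr)
    {
        bool done = false;

        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                  SPDK_NVME_GLOBAL_NS_TAG, hdr,
                                                  sizeof(*hdr), 0,
                                                  log_page_cb, &done);
        if (rc != 0) {
            return rc;
        }
        /* Poll the admin queue until the read completes (sketch only;
         * a real caller would bound this loop). */
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        /* hdr->genctr and hdr->numrec size the follow-up entry reads,
         * which is why the log shows several reads of page 0x70. */
        return 0;
    }
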
00:28:05.136 [2024-07-11 23:41:25.953659] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953666] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953715] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.136 [2024-07-11 23:41:25.953726] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.136 [2024-07-11 23:41:25.953732] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953738] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f960) on tqpair=0x15d45a0 00:28:05.136 [2024-07-11 23:41:25.953754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15d45a0) 00:28:05.136 [2024-07-11 23:41:25.953777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.136 [2024-07-11 23:41:25.953804] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f960, cid 4, qid 0 00:28:05.136 [2024-07-11 23:41:25.953954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.136 [2024-07-11 23:41:25.953966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.136 [2024-07-11 23:41:25.953972] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.953978] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15d45a0): datao=0, datal=8, cccid=4 00:28:05.136 [2024-07-11 23:41:25.953985] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163f960) on tqpair(0x15d45a0): expected_datao=0, payload_size=8 00:28:05.136 [2024-07-11 23:41:25.953995] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.954002] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.994296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.136 [2024-07-11 23:41:25.994313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.136 [2024-07-11 23:41:25.994321] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.136 [2024-07-11 23:41:25.994327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f960) on tqpair=0x15d45a0 =====================================================
00:28:05.136 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:05.136 =====================================================
00:28:05.136 Controller Capabilities/Features
00:28:05.136 ================================
00:28:05.136 Vendor ID: 0000
00:28:05.136 Subsystem Vendor ID: 0000
00:28:05.136 Serial Number: ....................
00:28:05.136 Model Number: ........................................
00:28:05.136 Firmware Version: 24.01.1
00:28:05.136 Recommended Arb Burst: 0
00:28:05.136 IEEE OUI Identifier: 00 00 00
00:28:05.136 Multi-path I/O
00:28:05.136 May have multiple subsystem ports: No
00:28:05.136 May have multiple controllers: No
00:28:05.136 Associated with SR-IOV VF: No
00:28:05.136 Max Data Transfer Size: 131072
00:28:05.136 Max Number of Namespaces: 0
00:28:05.136 Max Number of I/O Queues: 1024
00:28:05.136 NVMe Specification Version (VS): 1.3
00:28:05.136 NVMe Specification Version (Identify): 1.3
00:28:05.136 Maximum Queue Entries: 128
00:28:05.136 Contiguous Queues Required: Yes
00:28:05.136 Arbitration Mechanisms Supported
00:28:05.136 Weighted Round Robin: Not Supported
00:28:05.136 Vendor Specific: Not Supported
00:28:05.136 Reset Timeout: 15000 ms
00:28:05.136 Doorbell Stride: 4 bytes
00:28:05.136 NVM Subsystem Reset: Not Supported
00:28:05.136 Command Sets Supported
00:28:05.136 NVM Command Set: Supported
00:28:05.136 Boot Partition: Not Supported
00:28:05.136 Memory Page Size Minimum: 4096 bytes
00:28:05.136 Memory Page Size Maximum: 4096 bytes
00:28:05.136 Persistent Memory Region: Not Supported
00:28:05.136 Optional Asynchronous Events Supported
00:28:05.136 Namespace Attribute Notices: Not Supported
00:28:05.136 Firmware Activation Notices: Not Supported
00:28:05.136 ANA Change Notices: Not Supported
00:28:05.136 PLE Aggregate Log Change Notices: Not Supported
00:28:05.136 LBA Status Info Alert Notices: Not Supported
00:28:05.136 EGE Aggregate Log Change Notices: Not Supported
00:28:05.136 Normal NVM Subsystem Shutdown event: Not Supported
00:28:05.136 Zone Descriptor Change Notices: Not Supported
00:28:05.136 Discovery Log Change Notices: Supported
00:28:05.136 Controller Attributes
00:28:05.136 128-bit Host Identifier: Not Supported
00:28:05.137 Non-Operational Permissive Mode: Not Supported
00:28:05.137 NVM Sets: Not Supported
00:28:05.137 Read Recovery Levels: Not Supported
00:28:05.137 Endurance Groups: Not Supported
00:28:05.137 Predictable Latency Mode: Not Supported
00:28:05.137 Traffic Based Keep Alive: Not Supported
00:28:05.137 Namespace Granularity: Not Supported
00:28:05.137 SQ Associations: Not Supported
00:28:05.137 UUID List: Not Supported
00:28:05.137 Multi-Domain Subsystem: Not Supported
00:28:05.137 Fixed Capacity Management: Not Supported
00:28:05.137 Variable Capacity Management: Not Supported
00:28:05.137 Delete Endurance Group: Not Supported
00:28:05.137 Delete NVM Set: Not Supported
00:28:05.137 Extended LBA Formats Supported: Not Supported
00:28:05.137 Flexible Data Placement Supported: Not Supported
00:28:05.137
00:28:05.137 Controller Memory Buffer Support
00:28:05.137 ================================
00:28:05.137 Supported: No
00:28:05.137
00:28:05.137 Persistent Memory Region Support
00:28:05.137 ================================
00:28:05.137 Supported: No
00:28:05.137
00:28:05.137 Admin Command Set Attributes
00:28:05.137 ============================
00:28:05.137 Security Send/Receive: Not Supported
00:28:05.137 Format NVM: Not Supported
00:28:05.137 Firmware Activate/Download: Not Supported
00:28:05.137 Namespace Management: Not Supported
00:28:05.137 Device Self-Test: Not Supported
00:28:05.137 Directives: Not Supported
00:28:05.137 NVMe-MI: Not Supported
00:28:05.137 Virtualization Management: Not Supported
00:28:05.137 Doorbell Buffer Config: Not Supported
00:28:05.137 Get LBA Status Capability: Not Supported
00:28:05.137 Command & Feature Lockdown Capability: Not Supported
00:28:05.137 Abort Command Limit: 1
00:28:05.137 Async Event Request Limit: 4
00:28:05.137 Number of Firmware Slots: N/A
00:28:05.137 Firmware Slot 1 Read-Only: N/A
00:28:05.137 Firmware Activation Without Reset: N/A
00:28:05.137 Multiple Update Detection Support: N/A
00:28:05.137 Firmware Update Granularity: No Information Provided
00:28:05.137 Per-Namespace SMART Log: No
00:28:05.137 Asymmetric Namespace Access Log Page: Not Supported
00:28:05.137 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:05.137 Command Effects Log Page: Not Supported
00:28:05.137 Get Log Page Extended Data: Supported
00:28:05.137 Telemetry Log Pages: Not Supported
00:28:05.137 Persistent Event Log Pages: Not Supported
00:28:05.137 Supported Log Pages Log Page: May Support
00:28:05.137 Commands Supported & Effects Log Page: Not Supported
00:28:05.137 Feature Identifiers & Effects Log Page: May Support
00:28:05.137 NVMe-MI Commands & Effects Log Page: May Support
00:28:05.137 Data Area 4 for Telemetry Log: Not Supported
00:28:05.137 Error Log Page Entries Supported: 128
00:28:05.137 Keep Alive: Not Supported
00:28:05.137
00:28:05.137 NVM Command Set Attributes
00:28:05.137 ==========================
00:28:05.137 Submission Queue Entry Size
00:28:05.137 Max: 1
00:28:05.137 Min: 1
00:28:05.137 Completion Queue Entry Size
00:28:05.137 Max: 1
00:28:05.137 Min: 1
00:28:05.137 Number of Namespaces: 0
00:28:05.137 Compare Command: Not Supported
00:28:05.137 Write Uncorrectable Command: Not Supported
00:28:05.137 Dataset Management Command: Not Supported
00:28:05.137 Write Zeroes Command: Not Supported
00:28:05.137 Set Features Save Field: Not Supported
00:28:05.137 Reservations: Not Supported
00:28:05.137 Timestamp: Not Supported
00:28:05.137 Copy: Not Supported
00:28:05.137 Volatile Write Cache: Not Present
00:28:05.137 Atomic Write Unit (Normal): 1
00:28:05.137 Atomic Write Unit (PFail): 1
00:28:05.137 Atomic Compare & Write Unit: 1
00:28:05.137 Fused Compare & Write: Supported
00:28:05.137 Scatter-Gather List
00:28:05.137 SGL Command Set: Supported
00:28:05.137 SGL Keyed: Supported
00:28:05.137 SGL Bit Bucket Descriptor: Not Supported
00:28:05.137 SGL Metadata Pointer: Not Supported
00:28:05.137 Oversized SGL: Not Supported
00:28:05.137 SGL Metadata Address: Not Supported
00:28:05.137 SGL Offset: Supported
00:28:05.137 Transport SGL Data Block: Not Supported
00:28:05.137 Replay Protected Memory Block: Not Supported
00:28:05.137
00:28:05.137 Firmware Slot Information
00:28:05.137 =========================
00:28:05.137 Active slot: 0
00:28:05.137
00:28:05.137
00:28:05.137 Error Log
00:28:05.137 =========
00:28:05.137
00:28:05.137 Active Namespaces
00:28:05.137 =================
00:28:05.137 Discovery Log Page
00:28:05.137 ==================
00:28:05.137 Generation Counter: 2
00:28:05.137 Number of Records: 2
00:28:05.137 Record Format: 0
00:28:05.137
00:28:05.137 Discovery Log Entry 0
00:28:05.137 ----------------------
00:28:05.137 Transport Type: 3 (TCP)
00:28:05.137 Address Family: 1 (IPv4)
00:28:05.137 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:05.137 Entry Flags:
00:28:05.137 Duplicate Returned Information: 1
00:28:05.137 Explicit Persistent Connection Support for Discovery: 1
00:28:05.137 Transport Requirements:
00:28:05.137 Secure Channel: Not Required
00:28:05.137 Port ID: 0 (0x0000)
00:28:05.137 Controller ID: 65535 (0xffff)
00:28:05.137 Admin Max SQ Size: 128
00:28:05.137 Transport Service Identifier: 4420
00:28:05.137 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:05.137 Transport Address: 10.0.0.2
00:28:05.137 Discovery Log Entry 1
00:28:05.137 ----------------------
00:28:05.137 Transport Type: 3 (TCP)
00:28:05.137 Address Family: 1 (IPv4)
00:28:05.137 Subsystem Type: 2 (NVM Subsystem)
00:28:05.137 Entry Flags:
00:28:05.137 Duplicate Returned Information: 0
00:28:05.137 Explicit Persistent Connection Support for Discovery: 0
00:28:05.137 Transport Requirements:
00:28:05.137 Secure Channel: Not Required
00:28:05.137 Port ID: 0 (0x0000)
00:28:05.137 Controller ID: 65535 (0xffff)
00:28:05.137 Admin Max SQ Size: 128
00:28:05.137 Transport Service Identifier: 4420
00:28:05.137 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:05.137 Transport Address: 10.0.0.2 [2024-07-11 23:41:25.994457] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:28:05.137 [2024-07-11 23:41:25.994484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.137 [2024-07-11 23:41:25.994496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.137 [2024-07-11 23:41:25.994505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.137 [2024-07-11 23:41:25.994514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.137 [2024-07-11 23:41:25.994527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.994535] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.994541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15d45a0) 00:28:05.137 [2024-07-11 23:41:25.994552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.137 [2024-07-11 23:41:25.994576] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f800, cid 3, qid 0 00:28:05.137 [2024-07-11 23:41:25.994765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.137 [2024-07-11 23:41:25.994778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.137 [2024-07-11 23:41:25.994785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.994791] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f800) on tqpair=0x15d45a0 00:28:05.137 [2024-07-11 23:41:25.994803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.994810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.994816] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15d45a0) 00:28:05.137 [2024-07-11 23:41:25.994826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.137 [2024-07-11 23:41:25.994851] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f800, cid 3, qid 0 00:28:05.137 [2024-07-11 23:41:25.995026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.137 [2024-07-11 23:41:25.995041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.137 [2024-07-11 23:41:25.995047]
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.995053] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f800) on tqpair=0x15d45a0 00:28:05.137 [2024-07-11 23:41:25.995074] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:05.137 [2024-07-11 23:41:25.995086] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:05.137 [2024-07-11 23:41:25.995103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.995111] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.137 [2024-07-11 23:41:25.995117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15d45a0) 00:28:05.137 [2024-07-11 23:41:25.995127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.137 [2024-07-11 23:41:25.999169] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163f800, cid 3, qid 0 00:28:05.137 [2024-07-11 23:41:25.999412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.137 [2024-07-11 23:41:25.999428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.137 [2024-07-11 23:41:25.999434] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.138 [2024-07-11 23:41:25.999441] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x163f800) on tqpair=0x15d45a0 00:28:05.138 [2024-07-11 23:41:25.999472] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:28:05.138 00:28:05.138 23:41:26 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:05.138 [2024-07-11 23:41:26.033832] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
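
With the discovery log consumed, the tool detaches: the RTD3E and shutdown-timeout lines above are the spec-defined CC.SHN shutdown handshake, which the log reports completing in 4 milliseconds before the whole sequence repeats against nqn.2016-06.io.spdk:cnode1. A sketch of a graceful detach using SPDK's asynchronous variant (assumed public API, error handling trimmed, function name illustrative):

    /* Sketch: drive the CC.SHN shutdown seen above without blocking. */
    #include <errno.h>
    #include "spdk/nvme.h"

    void
    detach_gracefully(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_detach_ctx *ctx = NULL;

        if (spdk_nvme_detach_async(ctrlr, &ctx) == 0 && ctx != NULL) {
            /* Returns -EAGAIN while CSTS.SHST is still "shutdown occurring",
             * 0 once the controller reports shutdown complete. */
            while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
            }
        }
    }
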
00:28:05.138 [2024-07-11 23:41:26.033878] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344081 ] 00:28:05.138 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.138 [2024-07-11 23:41:26.068952] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:05.138 [2024-07-11 23:41:26.069003] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:05.138 [2024-07-11 23:41:26.069013] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:05.138 [2024-07-11 23:41:26.069026] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:05.138 [2024-07-11 23:41:26.069038] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:05.138 [2024-07-11 23:41:26.069414] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:05.138 [2024-07-11 23:41:26.069471] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1de45a0 0 00:28:05.138 [2024-07-11 23:41:26.080155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:05.138 [2024-07-11 23:41:26.080175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:05.138 [2024-07-11 23:41:26.080183] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:05.138 [2024-07-11 23:41:26.080189] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:05.138 [2024-07-11 23:41:26.080227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.138 [2024-07-11 23:41:26.080238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.138 [2024-07-11 23:41:26.080245] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.138 [2024-07-11 23:41:26.080259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:05.138 [2024-07-11 23:41:26.080286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.401 [2024-07-11 23:41:26.088186] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.401 [2024-07-11 23:41:26.088205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.401 [2024-07-11 23:41:26.088212] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.401 [2024-07-11 23:41:26.088238] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:05.401 [2024-07-11 23:41:26.088248] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:05.401 [2024-07-11 23:41:26.088257] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:05.401 [2024-07-11 23:41:26.088273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.401 [2024-07-11 
23:41:26.088288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.401 [2024-07-11 23:41:26.088299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.401 [2024-07-11 23:41:26.088322] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.401 [2024-07-11 23:41:26.088513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.401 [2024-07-11 23:41:26.088525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.401 [2024-07-11 23:41:26.088532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088538] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.401 [2024-07-11 23:41:26.088547] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:05.401 [2024-07-11 23:41:26.088560] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:05.401 [2024-07-11 23:41:26.088571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.401 [2024-07-11 23:41:26.088594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.401 [2024-07-11 23:41:26.088614] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.401 [2024-07-11 23:41:26.088792] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.401 [2024-07-11 23:41:26.088803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.401 [2024-07-11 23:41:26.088810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.401 [2024-07-11 23:41:26.088825] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:05.401 [2024-07-11 23:41:26.088839] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:05.401 [2024-07-11 23:41:26.088850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088857] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.088863] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.401 [2024-07-11 23:41:26.088873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.401 [2024-07-11 23:41:26.088893] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.401 [2024-07-11 23:41:26.089060] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.401 [2024-07-11 23:41:26.089075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
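
The FABRIC PROPERTY GET records in this second session are the fabrics transport's substitute for BAR register reads: the host fetches VS, CAP, and CC before toggling CC.EN, exactly as it did for the discovery controller. Once connected, SPDK exposes the cached register values through getters, as in this sketch (assumed public API, illustrative function name):

    /* Sketch: inspect the controller registers read via PROPERTY GET above. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    void
    print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);

        /* MQES is zero-based; a value of 127 here would match the
         * "Maximum Queue Entries: 128" line in the identify report. */
        printf("CAP.MQES=%u VS=%u.%u\n", cap.bits.mqes, vs.bits.mjr, vs.bits.mnr);
    }
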
00:28:05.401 [2024-07-11 23:41:26.089081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.401 [2024-07-11 23:41:26.089097] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:05.401 [2024-07-11 23:41:26.089114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.401 [2024-07-11 23:41:26.089165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.401 [2024-07-11 23:41:26.089199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.401 [2024-07-11 23:41:26.089370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.401 [2024-07-11 23:41:26.089385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.401 [2024-07-11 23:41:26.089392] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089399] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.401 [2024-07-11 23:41:26.089407] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:05.401 [2024-07-11 23:41:26.089416] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:05.401 [2024-07-11 23:41:26.089444] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:05.401 [2024-07-11 23:41:26.089555] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:05.401 [2024-07-11 23:41:26.089562] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:05.401 [2024-07-11 23:41:26.089573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089587] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.401 [2024-07-11 23:41:26.089597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.401 [2024-07-11 23:41:26.089617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.401 [2024-07-11 23:41:26.089782] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.401 [2024-07-11 23:41:26.089797] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.401 [2024-07-11 23:41:26.089803] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on 
tqpair=0x1de45a0 00:28:05.401 [2024-07-11 23:41:26.089818] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:05.401 [2024-07-11 23:41:26.089835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.401 [2024-07-11 23:41:26.089849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.401 [2024-07-11 23:41:26.089859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.401 [2024-07-11 23:41:26.089879] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.401 [2024-07-11 23:41:26.090035] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.401 [2024-07-11 23:41:26.090047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.401 [2024-07-11 23:41:26.090054] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090060] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.402 [2024-07-11 23:41:26.090068] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:05.402 [2024-07-11 23:41:26.090076] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.090089] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:05.402 [2024-07-11 23:41:26.090105] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.090133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.090167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.402 [2024-07-11 23:41:26.090189] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.402 [2024-07-11 23:41:26.090395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.402 [2024-07-11 23:41:26.090410] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.402 [2024-07-11 23:41:26.090431] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090438] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=4096, cccid=0 00:28:05.402 [2024-07-11 23:41:26.090445] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4f3e0) on tqpair(0x1de45a0): expected_datao=0, payload_size=4096 00:28:05.402 [2024-07-11 23:41:26.090475] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090483] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090577] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.402 [2024-07-11 23:41:26.090591] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.402 [2024-07-11 23:41:26.090598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090604] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.402 [2024-07-11 23:41:26.090615] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:05.402 [2024-07-11 23:41:26.090624] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:05.402 [2024-07-11 23:41:26.090631] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:05.402 [2024-07-11 23:41:26.090637] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:05.402 [2024-07-11 23:41:26.090644] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:05.402 [2024-07-11 23:41:26.090651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.090669] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.090682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090692] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.090709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:05.402 [2024-07-11 23:41:26.090730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.402 [2024-07-11 23:41:26.090886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.402 [2024-07-11 23:41:26.090898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.402 [2024-07-11 23:41:26.090904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f3e0) on tqpair=0x1de45a0 00:28:05.402 [2024-07-11 23:41:26.090921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090928] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.090943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.402 [2024-07-11 23:41:26.090952] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090959] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090964] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.090973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.402 [2024-07-11 23:41:26.090981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.090993] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.091001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.402 [2024-07-11 23:41:26.091010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091016] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091022] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.091030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.402 [2024-07-11 23:41:26.091038] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.091055] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.091067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.091090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.402 [2024-07-11 23:41:26.091111] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f3e0, cid 0, qid 0 00:28:05.402 [2024-07-11 23:41:26.091137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f540, cid 1, qid 0 00:28:05.402 [2024-07-11 23:41:26.091154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f6a0, cid 2, qid 0 00:28:05.402 [2024-07-11 23:41:26.091162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.402 [2024-07-11 23:41:26.091173] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f960, cid 4, qid 0 00:28:05.402 [2024-07-11 23:41:26.091351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.402 [2024-07-11 23:41:26.091366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.402 [2024-07-11 23:41:26.091372] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f960) on tqpair=0x1de45a0 00:28:05.402 [2024-07-11 23:41:26.091388] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:05.402 
[2024-07-11 23:41:26.091397] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.091411] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.091427] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.091438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091467] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.091477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:05.402 [2024-07-11 23:41:26.091498] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f960, cid 4, qid 0 00:28:05.402 [2024-07-11 23:41:26.091670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.402 [2024-07-11 23:41:26.091685] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.402 [2024-07-11 23:41:26.091691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f960) on tqpair=0x1de45a0 00:28:05.402 [2024-07-11 23:41:26.091759] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.091777] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.091791] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.091804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1de45a0) 00:28:05.402 [2024-07-11 23:41:26.091814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.402 [2024-07-11 23:41:26.091834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f960, cid 4, qid 0 00:28:05.402 [2024-07-11 23:41:26.092016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.402 [2024-07-11 23:41:26.092030] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.402 [2024-07-11 23:41:26.092037] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.092043] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=4096, cccid=4 00:28:05.402 [2024-07-11 23:41:26.092050] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4f960) on tqpair(0x1de45a0): expected_datao=0, payload_size=4096 00:28:05.402 [2024-07-11 23:41:26.092096] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.092105] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.096154] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.402 [2024-07-11 23:41:26.096174] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.402 [2024-07-11 23:41:26.096182] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.096188] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f960) on tqpair=0x1de45a0 00:28:05.402 [2024-07-11 23:41:26.096211] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:05.402 [2024-07-11 23:41:26.096230] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.096248] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:05.402 [2024-07-11 23:41:26.096260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.402 [2024-07-11 23:41:26.096268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.096274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.096284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.096306] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f960, cid 4, qid 0 00:28:05.403 [2024-07-11 23:41:26.096621] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.403 [2024-07-11 23:41:26.096636] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.403 [2024-07-11 23:41:26.096643] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.096648] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=4096, cccid=4 00:28:05.403 [2024-07-11 23:41:26.096656] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4f960) on tqpair(0x1de45a0): expected_datao=0, payload_size=4096 00:28:05.403 [2024-07-11 23:41:26.096666] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.096673] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.137362] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.403 [2024-07-11 23:41:26.137381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.403 [2024-07-11 23:41:26.137388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.137395] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f960) on tqpair=0x1de45a0 00:28:05.403 [2024-07-11 23:41:26.137420] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.137456] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.137471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 
23:41:26.137478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.137485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.137495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.137532] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f960, cid 4, qid 0 00:28:05.403 [2024-07-11 23:41:26.137695] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.403 [2024-07-11 23:41:26.137710] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.403 [2024-07-11 23:41:26.137717] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.137723] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=4096, cccid=4 00:28:05.403 [2024-07-11 23:41:26.137730] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4f960) on tqpair(0x1de45a0): expected_datao=0, payload_size=4096 00:28:05.403 [2024-07-11 23:41:26.137771] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.137781] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.403 [2024-07-11 23:41:26.178343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.403 [2024-07-11 23:41:26.178351] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f960) on tqpair=0x1de45a0 00:28:05.403 [2024-07-11 23:41:26.178373] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.178389] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.178406] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.178417] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.178426] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.178435] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:05.403 [2024-07-11 23:41:26.178443] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:05.403 [2024-07-11 23:41:26.178466] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:05.403 [2024-07-11 23:41:26.178485] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178494] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178500] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.178525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.178537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178544] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178550] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.178558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.403 [2024-07-11 23:41:26.178583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f960, cid 4, qid 0 00:28:05.403 [2024-07-11 23:41:26.178595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fac0, cid 5, qid 0 00:28:05.403 [2024-07-11 23:41:26.178763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.403 [2024-07-11 23:41:26.178778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.403 [2024-07-11 23:41:26.178784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178791] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f960) on tqpair=0x1de45a0 00:28:05.403 [2024-07-11 23:41:26.178802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.403 [2024-07-11 23:41:26.178811] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.403 [2024-07-11 23:41:26.178817] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fac0) on tqpair=0x1de45a0 00:28:05.403 [2024-07-11 23:41:26.178840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.178858] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.178869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.178889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fac0, cid 5, qid 0 00:28:05.403 [2024-07-11 23:41:26.179108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.403 [2024-07-11 23:41:26.179134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.403 [2024-07-11 23:41:26.179152] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179159] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fac0) on tqpair=0x1de45a0 00:28:05.403 [2024-07-11 23:41:26.179177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179192] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.179202] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.179223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fac0, cid 5, qid 0 00:28:05.403 [2024-07-11 23:41:26.179452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.403 [2024-07-11 23:41:26.179479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.403 [2024-07-11 23:41:26.179486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179492] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fac0) on tqpair=0x1de45a0 00:28:05.403 [2024-07-11 23:41:26.179508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.179533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.179553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fac0, cid 5, qid 0 00:28:05.403 [2024-07-11 23:41:26.179783] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.403 [2024-07-11 23:41:26.179795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.403 [2024-07-11 23:41:26.179801] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fac0) on tqpair=0x1de45a0 00:28:05.403 [2024-07-11 23:41:26.179825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179835] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.179851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.179862] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.179883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.179894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179910] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.179920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.179931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.403 [2024-07-11 23:41:26.179944] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1de45a0) 00:28:05.403 [2024-07-11 23:41:26.179953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.403 [2024-07-11 23:41:26.179974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fac0, cid 5, qid 0 00:28:05.403 [2024-07-11 23:41:26.179984] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f960, cid 4, qid 0 00:28:05.403 [2024-07-11 23:41:26.179992] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fc20, cid 6, qid 0 00:28:05.403 [2024-07-11 23:41:26.179999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fd80, cid 7, qid 0 00:28:05.403 [2024-07-11 23:41:26.184171] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.403 [2024-07-11 23:41:26.184188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.404 [2024-07-11 23:41:26.184195] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184201] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=8192, cccid=5 00:28:05.404 [2024-07-11 23:41:26.184209] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4fac0) on tqpair(0x1de45a0): expected_datao=0, payload_size=8192 00:28:05.404 [2024-07-11 23:41:26.184220] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184228] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184236] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.404 [2024-07-11 23:41:26.184245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.404 [2024-07-11 23:41:26.184251] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184257] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=512, cccid=4 00:28:05.404 [2024-07-11 23:41:26.184264] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4f960) on tqpair(0x1de45a0): expected_datao=0, payload_size=512 00:28:05.404 [2024-07-11 23:41:26.184274] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184281] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.404 [2024-07-11 23:41:26.184298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.404 [2024-07-11 23:41:26.184304] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184310] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=512, cccid=6 00:28:05.404 [2024-07-11 23:41:26.184317] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4fc20) on tqpair(0x1de45a0): expected_datao=0, payload_size=512 00:28:05.404 [2024-07-11 23:41:26.184327] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184334] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.404 [2024-07-11 23:41:26.184351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.404 [2024-07-11 23:41:26.184357] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184363] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1de45a0): datao=0, datal=4096, cccid=7 00:28:05.404 [2024-07-11 23:41:26.184374] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e4fd80) on tqpair(0x1de45a0): expected_datao=0, payload_size=4096 00:28:05.404 [2024-07-11 23:41:26.184385] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184393] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.404 [2024-07-11 23:41:26.184409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.404 [2024-07-11 23:41:26.184415] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fac0) on tqpair=0x1de45a0 00:28:05.404 [2024-07-11 23:41:26.184457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.404 [2024-07-11 23:41:26.184469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.404 [2024-07-11 23:41:26.184475] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f960) on tqpair=0x1de45a0 00:28:05.404 [2024-07-11 23:41:26.184496] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.404 [2024-07-11 23:41:26.184506] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.404 [2024-07-11 23:41:26.184513] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184519] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fc20) on tqpair=0x1de45a0 00:28:05.404 [2024-07-11 23:41:26.184530] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.404 [2024-07-11 23:41:26.184539] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.404 [2024-07-11 23:41:26.184545] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.404 [2024-07-11 23:41:26.184551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fd80) on tqpair=0x1de45a0 00:28:05.404 ===================================================== 00:28:05.404 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.404 ===================================================== 00:28:05.404 Controller Capabilities/Features 00:28:05.404 ================================ 00:28:05.404 Vendor ID: 8086 00:28:05.404 Subsystem Vendor ID: 8086 00:28:05.404 Serial Number: SPDK00000000000001 00:28:05.404 Model Number: SPDK bdev Controller 00:28:05.404 Firmware Version: 24.01.1 00:28:05.404 Recommended Arb Burst: 6 00:28:05.404 IEEE OUI Identifier: e4 d2 5c 00:28:05.404 Multi-path I/O 00:28:05.404 May have multiple subsystem 
ports: Yes 00:28:05.404 May have multiple controllers: Yes 00:28:05.404 Associated with SR-IOV VF: No 00:28:05.404 Max Data Transfer Size: 131072 00:28:05.404 Max Number of Namespaces: 32 00:28:05.404 Max Number of I/O Queues: 127 00:28:05.404 NVMe Specification Version (VS): 1.3 00:28:05.404 NVMe Specification Version (Identify): 1.3 00:28:05.404 Maximum Queue Entries: 128 00:28:05.404 Contiguous Queues Required: Yes 00:28:05.404 Arbitration Mechanisms Supported 00:28:05.404 Weighted Round Robin: Not Supported 00:28:05.404 Vendor Specific: Not Supported 00:28:05.404 Reset Timeout: 15000 ms 00:28:05.404 Doorbell Stride: 4 bytes 00:28:05.404 NVM Subsystem Reset: Not Supported 00:28:05.404 Command Sets Supported 00:28:05.404 NVM Command Set: Supported 00:28:05.404 Boot Partition: Not Supported 00:28:05.404 Memory Page Size Minimum: 4096 bytes 00:28:05.404 Memory Page Size Maximum: 4096 bytes 00:28:05.404 Persistent Memory Region: Not Supported 00:28:05.404 Optional Asynchronous Events Supported 00:28:05.404 Namespace Attribute Notices: Supported 00:28:05.404 Firmware Activation Notices: Not Supported 00:28:05.404 ANA Change Notices: Not Supported 00:28:05.404 PLE Aggregate Log Change Notices: Not Supported 00:28:05.404 LBA Status Info Alert Notices: Not Supported 00:28:05.404 EGE Aggregate Log Change Notices: Not Supported 00:28:05.404 Normal NVM Subsystem Shutdown event: Not Supported 00:28:05.404 Zone Descriptor Change Notices: Not Supported 00:28:05.404 Discovery Log Change Notices: Not Supported 00:28:05.404 Controller Attributes 00:28:05.404 128-bit Host Identifier: Supported 00:28:05.404 Non-Operational Permissive Mode: Not Supported 00:28:05.404 NVM Sets: Not Supported 00:28:05.404 Read Recovery Levels: Not Supported 00:28:05.404 Endurance Groups: Not Supported 00:28:05.404 Predictable Latency Mode: Not Supported 00:28:05.404 Traffic Based Keep Alive: Not Supported 00:28:05.404 Namespace Granularity: Not Supported 00:28:05.404 SQ Associations: Not Supported 00:28:05.404 UUID List: Not Supported 00:28:05.404 Multi-Domain Subsystem: Not Supported 00:28:05.404 Fixed Capacity Management: Not Supported 00:28:05.404 Variable Capacity Management: Not Supported 00:28:05.404 Delete Endurance Group: Not Supported 00:28:05.404 Delete NVM Set: Not Supported 00:28:05.404 Extended LBA Formats Supported: Not Supported 00:28:05.404 Flexible Data Placement Supported: Not Supported 00:28:05.404 00:28:05.404 Controller Memory Buffer Support 00:28:05.404 ================================ 00:28:05.404 Supported: No 00:28:05.404 00:28:05.404 Persistent Memory Region Support 00:28:05.404 ================================ 00:28:05.404 Supported: No 00:28:05.404 00:28:05.404 Admin Command Set Attributes 00:28:05.404 ============================ 00:28:05.404 Security Send/Receive: Not Supported 00:28:05.404 Format NVM: Not Supported 00:28:05.404 Firmware Activate/Download: Not Supported 00:28:05.404 Namespace Management: Not Supported 00:28:05.404 Device Self-Test: Not Supported 00:28:05.404 Directives: Not Supported 00:28:05.404 NVMe-MI: Not Supported 00:28:05.404 Virtualization Management: Not Supported 00:28:05.404 Doorbell Buffer Config: Not Supported 00:28:05.404 Get LBA Status Capability: Not Supported 00:28:05.404 Command & Feature Lockdown Capability: Not Supported 00:28:05.404 Abort Command Limit: 4 00:28:05.404 Async Event Request Limit: 4 00:28:05.404 Number of Firmware Slots: N/A 00:28:05.404 Firmware Slot 1 Read-Only: N/A 00:28:05.404 Firmware Activation Without Reset: N/A 00:28:05.404 Multiple 
Update Detection Support: N/A 00:28:05.404 Firmware Update Granularity: No Information Provided 00:28:05.404 Per-Namespace SMART Log: No 00:28:05.404 Asymmetric Namespace Access Log Page: Not Supported 00:28:05.404 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:05.404 Command Effects Log Page: Supported 00:28:05.404 Get Log Page Extended Data: Supported 00:28:05.404 Telemetry Log Pages: Not Supported 00:28:05.404 Persistent Event Log Pages: Not Supported 00:28:05.404 Supported Log Pages Log Page: May Support 00:28:05.404 Commands Supported & Effects Log Page: Not Supported 00:28:05.404 Feature Identifiers & Effects Log Page: May Support 00:28:05.404 NVMe-MI Commands & Effects Log Page: May Support 00:28:05.404 Data Area 4 for Telemetry Log: Not Supported 00:28:05.404 Error Log Page Entries Supported: 128 00:28:05.404 Keep Alive: Supported 00:28:05.404 Keep Alive Granularity: 10000 ms 00:28:05.404 00:28:05.404 NVM Command Set Attributes 00:28:05.404 ========================== 00:28:05.404 Submission Queue Entry Size 00:28:05.404 Max: 64 00:28:05.404 Min: 64 00:28:05.404 Completion Queue Entry Size 00:28:05.404 Max: 16 00:28:05.404 Min: 16 00:28:05.404 Number of Namespaces: 32 00:28:05.404 Compare Command: Supported 00:28:05.404 Write Uncorrectable Command: Not Supported 00:28:05.404 Dataset Management Command: Supported 00:28:05.404 Write Zeroes Command: Supported 00:28:05.404 Set Features Save Field: Not Supported 00:28:05.404 Reservations: Supported 00:28:05.404 Timestamp: Not Supported 00:28:05.404 Copy: Supported 00:28:05.404 Volatile Write Cache: Present 00:28:05.404 Atomic Write Unit (Normal): 1 00:28:05.404 Atomic Write Unit (PFail): 1 00:28:05.405 Atomic Compare & Write Unit: 1 00:28:05.405 Fused Compare & Write: Supported 00:28:05.405 Scatter-Gather List 00:28:05.405 SGL Command Set: Supported 00:28:05.405 SGL Keyed: Supported 00:28:05.405 SGL Bit Bucket Descriptor: Not Supported 00:28:05.405 SGL Metadata Pointer: Not Supported 00:28:05.405 Oversized SGL: Not Supported 00:28:05.405 SGL Metadata Address: Not Supported 00:28:05.405 SGL Offset: Supported 00:28:05.405 Transport SGL Data Block: Not Supported 00:28:05.405 Replay Protected Memory Block: Not Supported 00:28:05.405 00:28:05.405 Firmware Slot Information 00:28:05.405 ========================= 00:28:05.405 Active slot: 1 00:28:05.405 Slot 1 Firmware Revision: 24.01.1 00:28:05.405 00:28:05.405 00:28:05.405 Commands Supported and Effects 00:28:05.405 ============================== 00:28:05.405 Admin Commands 00:28:05.405 -------------- 00:28:05.405 Get Log Page (02h): Supported 00:28:05.405 Identify (06h): Supported 00:28:05.405 Abort (08h): Supported 00:28:05.405 Set Features (09h): Supported 00:28:05.405 Get Features (0Ah): Supported 00:28:05.405 Asynchronous Event Request (0Ch): Supported 00:28:05.405 Keep Alive (18h): Supported 00:28:05.405 I/O Commands 00:28:05.405 ------------ 00:28:05.405 Flush (00h): Supported LBA-Change 00:28:05.405 Write (01h): Supported LBA-Change 00:28:05.405 Read (02h): Supported 00:28:05.405 Compare (05h): Supported 00:28:05.405 Write Zeroes (08h): Supported LBA-Change 00:28:05.405 Dataset Management (09h): Supported LBA-Change 00:28:05.405 Copy (19h): Supported LBA-Change 00:28:05.405 Unknown (79h): Supported LBA-Change 00:28:05.405 Unknown (7Ah): Supported 00:28:05.405 00:28:05.405 Error Log 00:28:05.405 ========= 00:28:05.405 00:28:05.405 Arbitration 00:28:05.405 =========== 00:28:05.405 Arbitration Burst: 1 00:28:05.405 00:28:05.405 Power Management 00:28:05.405 ================ 00:28:05.405 
Number of Power States: 1 00:28:05.405 Current Power State: Power State #0 00:28:05.405 Power State #0: 00:28:05.405 Max Power: 0.00 W 00:28:05.405 Non-Operational State: Operational 00:28:05.405 Entry Latency: Not Reported 00:28:05.405 Exit Latency: Not Reported 00:28:05.405 Relative Read Throughput: 0 00:28:05.405 Relative Read Latency: 0 00:28:05.405 Relative Write Throughput: 0 00:28:05.405 Relative Write Latency: 0 00:28:05.405 Idle Power: Not Reported 00:28:05.405 Active Power: Not Reported 00:28:05.405 Non-Operational Permissive Mode: Not Supported 00:28:05.405 00:28:05.405 Health Information 00:28:05.405 ================== 00:28:05.405 Critical Warnings: 00:28:05.405 Available Spare Space: OK 00:28:05.405 Temperature: OK 00:28:05.405 Device Reliability: OK 00:28:05.405 Read Only: No 00:28:05.405 Volatile Memory Backup: OK 00:28:05.405 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:05.405 Temperature Threshold: [2024-07-11 23:41:26.184670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.184682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.184689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1de45a0) 00:28:05.405 [2024-07-11 23:41:26.184699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.405 [2024-07-11 23:41:26.184722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4fd80, cid 7, qid 0 00:28:05.405 [2024-07-11 23:41:26.184968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.405 [2024-07-11 23:41:26.184983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.405 [2024-07-11 23:41:26.184989] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.184996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4fd80) on tqpair=0x1de45a0 00:28:05.405 [2024-07-11 23:41:26.185037] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:05.405 [2024-07-11 23:41:26.185057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.405 [2024-07-11 23:41:26.185069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.405 [2024-07-11 23:41:26.185078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.405 [2024-07-11 23:41:26.185087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.405 [2024-07-11 23:41:26.185099] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185113] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.405 [2024-07-11 23:41:26.185127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.405 [2024-07-11 23:41:26.185171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 
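For reference, everything in this block comes from one run of the SPDK identify example against the target the test configured earlier: the SPDK00000000000001 serial number, the TCP listener at 10.0.0.2:4420, and the 131072 x 512-byte namespace (consistent with a 64 MiB Malloc bdev) all match that setup. A hand-rolled equivalent is sketched below; paths are relative to an SPDK build tree and the commands are illustrative, not the autotest's exact invocations.

    ./build/bin/nvmf_tgt &
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 131072 LBAs x 512 B = 64 MiB
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Prints the controller report seen here, then detaches; the detach is the
    # "Prepare to destruct SSD" / shutdown property sequence in the records that follow.
    ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'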
00:28:05.405 [2024-07-11 23:41:26.185393] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.405 [2024-07-11 23:41:26.185408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.405 [2024-07-11 23:41:26.185414] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.405 [2024-07-11 23:41:26.185433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185447] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.405 [2024-07-11 23:41:26.185473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.405 [2024-07-11 23:41:26.185499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.405 [2024-07-11 23:41:26.185718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.405 [2024-07-11 23:41:26.185733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.405 [2024-07-11 23:41:26.185739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.405 [2024-07-11 23:41:26.185754] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:05.405 [2024-07-11 23:41:26.185762] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:05.405 [2024-07-11 23:41:26.185778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185786] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.185792] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.405 [2024-07-11 23:41:26.185802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.405 [2024-07-11 23:41:26.185822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.405 [2024-07-11 23:41:26.186015] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.405 [2024-07-11 23:41:26.186029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.405 [2024-07-11 23:41:26.186036] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.186042] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.405 [2024-07-11 23:41:26.186059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.186067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.186073] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.405 [2024-07-11 23:41:26.186083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.405 [2024-07-11 
23:41:26.186103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.405 [2024-07-11 23:41:26.186282] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.405 [2024-07-11 23:41:26.186298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.405 [2024-07-11 23:41:26.186305] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.186311] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.405 [2024-07-11 23:41:26.186330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.186339] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.405 [2024-07-11 23:41:26.186349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.186361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.186382] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.186579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.186594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.186601] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.186607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.186624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.186633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.186639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.186649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.186669] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.186891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.186902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.186909] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.186915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.186931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.186940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.186946] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.186956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.186975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.187115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.187129] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.187135] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187166] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.187186] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.187211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.187232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.187417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.187428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.187435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187457] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.187474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187483] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187492] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.187503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.187522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.187733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.187747] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.187754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.187777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187785] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187791] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.187801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.187821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.187960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.187974] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.187980] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.187986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.188004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.188012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.188018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.188028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.188048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.192170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.192186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.192193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.192200] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.192218] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.192227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.192234] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1de45a0) 00:28:05.406 [2024-07-11 23:41:26.192244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.406 [2024-07-11 23:41:26.192266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e4f800, cid 3, qid 0 00:28:05.406 [2024-07-11 23:41:26.192478] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.406 [2024-07-11 23:41:26.192493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.406 [2024-07-11 23:41:26.192499] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.406 [2024-07-11 23:41:26.192506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e4f800) on tqpair=0x1de45a0 00:28:05.406 [2024-07-11 23:41:26.192521] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:28:05.406 0 Kelvin (-273 Celsius) 00:28:05.406 Available Spare: 0% 00:28:05.406 Available Spare Threshold: 0% 00:28:05.406 Life Percentage Used: 0% 00:28:05.406 Data Units Read: 0 00:28:05.406 Data Units Written: 0 00:28:05.406 Host Read Commands: 0 00:28:05.406 Host Write Commands: 0 00:28:05.406 Controller Busy Time: 0 minutes 00:28:05.406 Power Cycles: 0 00:28:05.406 Power On Hours: 0 hours 00:28:05.406 Unsafe Shutdowns: 0 00:28:05.406 Unrecoverable Media Errors: 0 00:28:05.406 Lifetime Error Log Entries: 0 00:28:05.406 Warning Temperature Time: 0 minutes 00:28:05.406 Critical Temperature Time: 0 minutes 00:28:05.406 00:28:05.406 Number of Queues 00:28:05.406 ================ 00:28:05.406 Number of I/O Submission Queues: 127 00:28:05.406 Number of I/O Completion Queues: 127 00:28:05.406 00:28:05.406 Active Namespaces 00:28:05.406 ================= 00:28:05.406 Namespace ID:1 00:28:05.406 Error Recovery Timeout: Unlimited 
00:28:05.406 Command Set Identifier: NVM (00h) 00:28:05.406 Deallocate: Supported 00:28:05.406 Deallocated/Unwritten Error: Not Supported 00:28:05.406 Deallocated Read Value: Unknown 00:28:05.406 Deallocate in Write Zeroes: Not Supported 00:28:05.406 Deallocated Guard Field: 0xFFFF 00:28:05.406 Flush: Supported 00:28:05.406 Reservation: Supported 00:28:05.406 Namespace Sharing Capabilities: Multiple Controllers 00:28:05.406 Size (in LBAs): 131072 (0GiB) 00:28:05.406 Capacity (in LBAs): 131072 (0GiB) 00:28:05.406 Utilization (in LBAs): 131072 (0GiB) 00:28:05.406 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:05.406 EUI64: ABCDEF0123456789 00:28:05.406 UUID: 1fdd8f7f-704a-4a81-95f3-2991f9c309c7 00:28:05.406 Thin Provisioning: Not Supported 00:28:05.406 Per-NS Atomic Units: Yes 00:28:05.406 Atomic Boundary Size (Normal): 0 00:28:05.406 Atomic Boundary Size (PFail): 0 00:28:05.406 Atomic Boundary Offset: 0 00:28:05.406 Maximum Single Source Range Length: 65535 00:28:05.406 Maximum Copy Length: 65535 00:28:05.406 Maximum Source Range Count: 1 00:28:05.406 NGUID/EUI64 Never Reused: No 00:28:05.406 Namespace Write Protected: No 00:28:05.406 Number of LBA Formats: 1 00:28:05.406 Current LBA Format: LBA Format #00 00:28:05.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:05.406 00:28:05.406 23:41:26 -- host/identify.sh@51 -- # sync 00:28:05.406 23:41:26 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.406 23:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.406 23:41:26 -- common/autotest_common.sh@10 -- # set +x 00:28:05.406 23:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.406 23:41:26 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:05.406 23:41:26 -- host/identify.sh@56 -- # nvmftestfini 00:28:05.406 23:41:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:05.406 23:41:26 -- nvmf/common.sh@116 -- # sync 00:28:05.406 23:41:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:05.406 23:41:26 -- nvmf/common.sh@119 -- # set +e 00:28:05.406 23:41:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:05.406 23:41:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:05.406 rmmod nvme_tcp 00:28:05.406 rmmod nvme_fabrics 00:28:05.406 rmmod nvme_keyring 00:28:05.406 23:41:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:05.406 23:41:26 -- nvmf/common.sh@123 -- # set -e 00:28:05.406 23:41:26 -- nvmf/common.sh@124 -- # return 0 00:28:05.406 23:41:26 -- nvmf/common.sh@477 -- # '[' -n 343797 ']' 00:28:05.406 23:41:26 -- nvmf/common.sh@478 -- # killprocess 343797 00:28:05.406 23:41:26 -- common/autotest_common.sh@926 -- # '[' -z 343797 ']' 00:28:05.406 23:41:26 -- common/autotest_common.sh@930 -- # kill -0 343797 00:28:05.406 23:41:26 -- common/autotest_common.sh@931 -- # uname 00:28:05.407 23:41:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:05.407 23:41:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 343797 00:28:05.407 23:41:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:05.407 23:41:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:05.407 23:41:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 343797' 00:28:05.407 killing process with pid 343797 00:28:05.407 23:41:26 -- common/autotest_common.sh@945 -- # kill 343797 00:28:05.407 [2024-07-11 23:41:26.303115] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor 
of trtype' scheduled for removal in v24.05 hit 1 times 00:28:05.407 23:41:26 -- common/autotest_common.sh@950 -- # wait 343797 00:28:05.666 23:41:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:05.666 23:41:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:05.666 23:41:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:05.666 23:41:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.666 23:41:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:05.666 23:41:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.666 23:41:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.666 23:41:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.211 23:41:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:08.211 00:28:08.211 real 0m7.168s 00:28:08.211 user 0m9.478s 00:28:08.211 sys 0m2.571s 00:28:08.211 23:41:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.211 23:41:28 -- common/autotest_common.sh@10 -- # set +x 00:28:08.211 ************************************ 00:28:08.211 END TEST nvmf_identify 00:28:08.211 ************************************ 00:28:08.211 23:41:28 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:08.211 23:41:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:08.211 23:41:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:08.211 23:41:28 -- common/autotest_common.sh@10 -- # set +x 00:28:08.211 ************************************ 00:28:08.211 START TEST nvmf_perf 00:28:08.211 ************************************ 00:28:08.211 23:41:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:08.211 * Looking for test storage... 
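The nvmf_perf test launched just above drives SPDK's perf example against the same kind of TCP target, rather than the identify tool. The flags perf.sh ultimately passes are not visible in this slice of the log, so the invocation below only illustrates the tool's interface; queue depth, I/O size, read/write mix, and runtime are placeholder values.

    # Illustrative only: 4 KiB random 50/50 read/write at queue depth 32 for 10 s
    ./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'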
00:28:08.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.211 23:41:28 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.211 23:41:28 -- nvmf/common.sh@7 -- # uname -s 00:28:08.211 23:41:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.211 23:41:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.211 23:41:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.211 23:41:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.211 23:41:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.211 23:41:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.211 23:41:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.211 23:41:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.211 23:41:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.211 23:41:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.211 23:41:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:08.211 23:41:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:08.211 23:41:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.211 23:41:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.211 23:41:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.211 23:41:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.211 23:41:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.211 23:41:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.211 23:41:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.211 23:41:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.211 23:41:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.211 23:41:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.211 23:41:28 -- paths/export.sh@5 -- # export PATH 00:28:08.211 23:41:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.211 23:41:28 -- nvmf/common.sh@46 -- # : 0 00:28:08.211 23:41:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:08.211 23:41:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:08.211 23:41:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:08.211 23:41:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.211 23:41:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.211 23:41:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:08.211 23:41:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:08.211 23:41:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:08.211 23:41:28 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:08.211 23:41:28 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:08.211 23:41:28 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:08.211 23:41:28 -- host/perf.sh@17 -- # nvmftestinit 00:28:08.211 23:41:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:08.211 23:41:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.211 23:41:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:08.211 23:41:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:08.211 23:41:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:08.211 23:41:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.211 23:41:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.211 23:41:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.211 23:41:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:08.211 23:41:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:08.212 23:41:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:08.212 23:41:28 -- common/autotest_common.sh@10 -- # set +x 00:28:10.749 23:41:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:10.749 23:41:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:10.749 23:41:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:10.749 23:41:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:10.749 23:41:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:10.749 23:41:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:10.749 23:41:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:10.749 23:41:31 -- nvmf/common.sh@294 -- # net_devs=() 
00:28:10.749 23:41:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:10.749 23:41:31 -- nvmf/common.sh@295 -- # e810=() 00:28:10.749 23:41:31 -- nvmf/common.sh@295 -- # local -ga e810 00:28:10.749 23:41:31 -- nvmf/common.sh@296 -- # x722=() 00:28:10.749 23:41:31 -- nvmf/common.sh@296 -- # local -ga x722 00:28:10.749 23:41:31 -- nvmf/common.sh@297 -- # mlx=() 00:28:10.749 23:41:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:10.749 23:41:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.749 23:41:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:10.749 23:41:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:10.749 23:41:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:10.749 23:41:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:10.749 23:41:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:10.749 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:10.749 23:41:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:10.749 23:41:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:10.749 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:10.749 23:41:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:10.749 23:41:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:10.749 23:41:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.749 23:41:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:10.749 23:41:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:10.749 23:41:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:10.749 Found net devices under 0000:84:00.0: cvl_0_0 00:28:10.749 23:41:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.749 23:41:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:10.749 23:41:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.749 23:41:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:10.749 23:41:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.749 23:41:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:10.749 Found net devices under 0000:84:00.1: cvl_0_1 00:28:10.749 23:41:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.749 23:41:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:10.749 23:41:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:10.749 23:41:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:10.749 23:41:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:10.749 23:41:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.749 23:41:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.749 23:41:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.749 23:41:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:10.749 23:41:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.749 23:41:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.749 23:41:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:10.749 23:41:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.749 23:41:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.749 23:41:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:10.749 23:41:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:10.749 23:41:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.750 23:41:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.750 23:41:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.750 23:41:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.750 23:41:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:10.750 23:41:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.750 23:41:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.750 23:41:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.750 23:41:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:10.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:28:10.750 00:28:10.750 --- 10.0.0.2 ping statistics --- 00:28:10.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.750 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:28:10.750 23:41:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:28:10.750 00:28:10.750 --- 10.0.0.1 ping statistics --- 00:28:10.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.750 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:10.750 23:41:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.750 23:41:31 -- nvmf/common.sh@410 -- # return 0 00:28:10.750 23:41:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:10.750 23:41:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.750 23:41:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:10.750 23:41:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:10.750 23:41:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.750 23:41:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:10.750 23:41:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:10.750 23:41:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:10.750 23:41:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:10.750 23:41:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:10.750 23:41:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.750 23:41:31 -- nvmf/common.sh@469 -- # nvmfpid=346043 00:28:10.750 23:41:31 -- nvmf/common.sh@470 -- # waitforlisten 346043 00:28:10.750 23:41:31 -- common/autotest_common.sh@819 -- # '[' -z 346043 ']' 00:28:10.750 23:41:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.750 23:41:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.750 23:41:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:10.750 23:41:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.750 23:41:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:10.750 23:41:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.750 [2024-07-11 23:41:31.498289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:10.750 [2024-07-11 23:41:31.498467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.750 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.750 [2024-07-11 23:41:31.628671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.007 [2024-07-11 23:41:31.727536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:11.007 [2024-07-11 23:41:31.727693] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.007 [2024-07-11 23:41:31.727713] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.007 [2024-07-11 23:41:31.727727] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:11.008 [2024-07-11 23:41:31.727790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.008 [2024-07-11 23:41:31.727848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.008 [2024-07-11 23:41:31.727909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.008 [2024-07-11 23:41:31.727912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.941 23:41:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:11.941 23:41:32 -- common/autotest_common.sh@852 -- # return 0 00:28:11.941 23:41:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:11.941 23:41:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:11.941 23:41:32 -- common/autotest_common.sh@10 -- # set +x 00:28:11.941 23:41:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.941 23:41:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:11.941 23:41:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:15.224 23:41:35 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:15.224 23:41:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:15.482 23:41:36 -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:28:15.482 23:41:36 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:15.741 23:41:36 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:15.741 23:41:36 -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:28:15.741 23:41:36 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:15.741 23:41:36 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:15.741 23:41:36 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:16.311 [2024-07-11 23:41:37.097277] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.311 23:41:37 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.569 23:41:37 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:16.569 23:41:37 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.829 23:41:37 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:16.829 23:41:37 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:17.428 23:41:38 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.686 [2024-07-11 23:41:38.611037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.686 23:41:38 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:18.252 23:41:38 -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:28:18.252 23:41:38 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:28:18.252 23:41:38 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
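(The trace above finishes provisioning the target before any workload runs: TCP transport, subsystem, two namespaces, then data and discovery listeners. A minimal sketch of that RPC sequence, assuming a running nvmf_tgt, the rpc.py path from this workspace, and the bdev names this log reports (Malloc0, Nvme0n1) — every command and flag below is copied from the trace, not invented:)

#!/usr/bin/env bash
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# TCP transport with the same flags the test passes
$RPC nvmf_create_transport -t tcp -o

# Subsystem with allow-any-host (-a) and the serial used in this run
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns $NQN Malloc0
$RPC nvmf_subsystem_add_ns $NQN Nvme0n1

# Data and discovery listeners on the in-namespace target address
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
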
00:28:18.252 23:41:38 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:28:19.629 Initializing NVMe Controllers 00:28:19.629 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:28:19.629 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:28:19.629 Initialization complete. Launching workers. 00:28:19.629 ======================================================== 00:28:19.629 Latency(us) 00:28:19.629 Device Information : IOPS MiB/s Average min max 00:28:19.629 PCIE (0000:82:00.0) NSID 1 from core 0: 85963.03 335.79 371.79 43.01 7256.34 00:28:19.629 ======================================================== 00:28:19.629 Total : 85963.03 335.79 371.79 43.01 7256.34 00:28:19.629 00:28:19.629 23:41:40 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.629 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.565 Initializing NVMe Controllers 00:28:20.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:20.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:20.565 Initialization complete. Launching workers. 00:28:20.565 ======================================================== 00:28:20.565 Latency(us) 00:28:20.565 Device Information : IOPS MiB/s Average min max 00:28:20.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 12869.56 243.77 46307.32 00:28:20.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20109.59 5991.84 57832.97 00:28:20.565 ======================================================== 00:28:20.565 Total : 129.00 0.50 15675.77 243.77 57832.97 00:28:20.565 00:28:20.565 23:41:41 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.565 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.944 Initializing NVMe Controllers 00:28:21.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:21.944 Initialization complete. Launching workers. 
00:28:21.944 ======================================================== 00:28:21.944 Latency(us) 00:28:21.944 Device Information : IOPS MiB/s Average min max 00:28:21.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7654.52 29.90 4189.88 588.55 11563.72 00:28:21.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3875.25 15.14 8268.14 6235.37 19559.70 00:28:21.944 ======================================================== 00:28:21.944 Total : 11529.77 45.04 5560.62 588.55 19559.70 00:28:21.944 00:28:21.944 23:41:42 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:21.944 23:41:42 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:21.944 23:41:42 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.944 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.252 Initializing NVMe Controllers 00:28:25.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.252 Controller IO queue size 128, less than required. 00:28:25.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.252 Controller IO queue size 128, less than required. 00:28:25.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:25.252 Initialization complete. Launching workers. 00:28:25.252 ======================================================== 00:28:25.252 Latency(us) 00:28:25.252 Device Information : IOPS MiB/s Average min max 00:28:25.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 911.97 227.99 146159.34 79585.37 188496.78 00:28:25.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.98 153.24 221571.76 94278.77 339115.51 00:28:25.252 ======================================================== 00:28:25.252 Total : 1524.95 381.24 176472.66 79585.37 339115.51 00:28:25.252 00:28:25.252 23:41:45 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:25.252 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.252 No valid NVMe controllers or AIO or URING devices found 00:28:25.252 Initializing NVMe Controllers 00:28:25.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.252 Controller IO queue size 128, less than required. 00:28:25.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.253 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:25.253 Controller IO queue size 128, less than required. 00:28:25.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.253 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:25.253 WARNING: Some requested NVMe devices were skipped 00:28:25.253 23:41:45 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:25.253 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.782 Initializing NVMe Controllers 00:28:27.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.782 Controller IO queue size 128, less than required. 00:28:27.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.782 Controller IO queue size 128, less than required. 00:28:27.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:27.782 Initialization complete. Launching workers. 00:28:27.782 00:28:27.782 ==================== 00:28:27.782 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:27.782 TCP transport: 00:28:27.782 polls: 29703 00:28:27.782 idle_polls: 9773 00:28:27.782 sock_completions: 19930 00:28:27.782 nvme_completions: 3312 00:28:27.782 submitted_requests: 5105 00:28:27.782 queued_requests: 1 00:28:27.782 00:28:27.782 ==================== 00:28:27.782 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:27.782 TCP transport: 00:28:27.782 polls: 32257 00:28:27.782 idle_polls: 11960 00:28:27.782 sock_completions: 20297 00:28:27.782 nvme_completions: 3403 00:28:27.782 submitted_requests: 5199 00:28:27.782 queued_requests: 1 00:28:27.782 ======================================================== 00:28:27.782 Latency(us) 00:28:27.782 Device Information : IOPS MiB/s Average min max 00:28:27.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 891.49 222.87 148214.92 87748.69 220789.19 00:28:27.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 914.49 228.62 146165.68 69039.11 215191.93 00:28:27.782 ======================================================== 00:28:27.782 Total : 1805.98 451.50 147177.25 69039.11 220789.19 00:28:27.782 00:28:27.782 23:41:48 -- host/perf.sh@66 -- # sync 00:28:27.783 23:41:48 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:28.041 23:41:48 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:28.041 23:41:48 -- host/perf.sh@71 -- # '[' -n 0000:82:00.0 ']' 00:28:28.041 23:41:48 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:31.325 23:41:52 -- host/perf.sh@72 -- # ls_guid=959d5ed9-352f-4844-9990-222df9638b56 00:28:31.325 23:41:52 -- host/perf.sh@73 -- # get_lvs_free_mb 959d5ed9-352f-4844-9990-222df9638b56 00:28:31.325 23:41:52 -- common/autotest_common.sh@1343 -- # local lvs_uuid=959d5ed9-352f-4844-9990-222df9638b56 00:28:31.325 23:41:52 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:31.325 23:41:52 -- common/autotest_common.sh@1345 -- # local fc 00:28:31.325 23:41:52 -- common/autotest_common.sh@1346 -- # local cs 00:28:31.325 23:41:52 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:31.582 23:41:52 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:31.583 { 00:28:31.583 "uuid": "959d5ed9-352f-4844-9990-222df9638b56", 00:28:31.583 "name": "lvs_0", 00:28:31.583 "base_bdev": "Nvme0n1", 00:28:31.583 "total_data_clusters": 238234, 00:28:31.583 "free_clusters": 238234, 00:28:31.583 "block_size": 512, 00:28:31.583 "cluster_size": 4194304 00:28:31.583 } 00:28:31.583 ]' 00:28:31.583 23:41:52 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="959d5ed9-352f-4844-9990-222df9638b56") .free_clusters' 00:28:31.841 23:41:52 -- common/autotest_common.sh@1348 -- # fc=238234 00:28:31.841 23:41:52 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="959d5ed9-352f-4844-9990-222df9638b56") .cluster_size' 00:28:31.841 23:41:52 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:31.841 23:41:52 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:28:31.841 23:41:52 -- common/autotest_common.sh@1353 -- # echo 952936 00:28:31.841 952936 00:28:31.841 23:41:52 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:31.841 23:41:52 -- host/perf.sh@78 -- # free_mb=20480 00:28:31.841 23:41:52 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 959d5ed9-352f-4844-9990-222df9638b56 lbd_0 20480 00:28:32.773 23:41:53 -- host/perf.sh@80 -- # lb_guid=40dc506e-cd06-4ecb-827b-efdec6647865 00:28:32.773 23:41:53 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 40dc506e-cd06-4ecb-827b-efdec6647865 lvs_n_0 00:28:33.708 23:41:54 -- host/perf.sh@83 -- # ls_nested_guid=a5ec1ba6-8f06-41c3-82ab-3d3ec8b3a7b0 00:28:33.708 23:41:54 -- host/perf.sh@84 -- # get_lvs_free_mb a5ec1ba6-8f06-41c3-82ab-3d3ec8b3a7b0 00:28:33.708 23:41:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a5ec1ba6-8f06-41c3-82ab-3d3ec8b3a7b0 00:28:33.708 23:41:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:33.708 23:41:54 -- common/autotest_common.sh@1345 -- # local fc 00:28:33.708 23:41:54 -- common/autotest_common.sh@1346 -- # local cs 00:28:33.708 23:41:54 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:33.708 23:41:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:33.708 { 00:28:33.708 "uuid": "959d5ed9-352f-4844-9990-222df9638b56", 00:28:33.708 "name": "lvs_0", 00:28:33.708 "base_bdev": "Nvme0n1", 00:28:33.708 "total_data_clusters": 238234, 00:28:33.708 "free_clusters": 233114, 00:28:33.708 "block_size": 512, 00:28:33.708 "cluster_size": 4194304 00:28:33.708 }, 00:28:33.708 { 00:28:33.708 "uuid": "a5ec1ba6-8f06-41c3-82ab-3d3ec8b3a7b0", 00:28:33.708 "name": "lvs_n_0", 00:28:33.708 "base_bdev": "40dc506e-cd06-4ecb-827b-efdec6647865", 00:28:33.708 "total_data_clusters": 5114, 00:28:33.708 "free_clusters": 5114, 00:28:33.708 "block_size": 512, 00:28:33.708 "cluster_size": 4194304 00:28:33.708 } 00:28:33.708 ]' 00:28:33.708 23:41:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a5ec1ba6-8f06-41c3-82ab-3d3ec8b3a7b0") .free_clusters' 00:28:33.708 23:41:54 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:33.708 23:41:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a5ec1ba6-8f06-41c3-82ab-3d3ec8b3a7b0") .cluster_size' 00:28:33.966 23:41:54 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:33.966 23:41:54 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:28:33.966 23:41:54 -- common/autotest_common.sh@1353 -- # echo 20456 00:28:33.966 20456 00:28:33.966 23:41:54 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:33.966 23:41:54 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5ec1ba6-8f06-41c3-82ab-3d3ec8b3a7b0 lbd_nest_0 20456 00:28:34.224 23:41:55 -- host/perf.sh@88 -- # lb_nested_guid=824d827e-9aa6-426c-b513-dd3aba1c8668 00:28:34.224 23:41:55 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.482 23:41:55 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:34.482 23:41:55 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 824d827e-9aa6-426c-b513-dd3aba1c8668 00:28:34.740 23:41:55 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.000 23:41:55 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:35.000 23:41:55 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:35.000 23:41:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:35.000 23:41:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:35.000 23:41:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:35.000 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.202 Initializing NVMe Controllers 00:28:47.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:47.202 Initialization complete. Launching workers. 00:28:47.202 ======================================================== 00:28:47.202 Latency(us) 00:28:47.202 Device Information : IOPS MiB/s Average min max 00:28:47.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.20 0.02 22147.02 222.79 46669.63 00:28:47.202 ======================================================== 00:28:47.202 Total : 45.20 0.02 22147.02 222.79 46669.63 00:28:47.202 00:28:47.202 23:42:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:47.202 23:42:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:47.202 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.230 Initializing NVMe Controllers 00:28:57.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:57.230 Initialization complete. Launching workers. 
00:28:57.230 ======================================================== 00:28:57.230 Latency(us) 00:28:57.230 Device Information : IOPS MiB/s Average min max 00:28:57.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.30 10.41 12010.96 4995.68 47899.64 00:28:57.230 ======================================================== 00:28:57.230 Total : 83.30 10.41 12010.96 4995.68 47899.64 00:28:57.230 00:28:57.230 23:42:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:57.230 23:42:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:57.230 23:42:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:57.230 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.198 Initializing NVMe Controllers 00:29:07.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.199 Initialization complete. Launching workers. 00:29:07.199 ======================================================== 00:29:07.199 Latency(us) 00:29:07.199 Device Information : IOPS MiB/s Average min max 00:29:07.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6785.72 3.31 4717.05 360.88 12135.13 00:29:07.199 ======================================================== 00:29:07.199 Total : 6785.72 3.31 4717.05 360.88 12135.13 00:29:07.199 00:29:07.199 23:42:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:07.199 23:42:26 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.199 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.164 Initializing NVMe Controllers 00:29:17.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:17.164 Initialization complete. Launching workers. 00:29:17.164 ======================================================== 00:29:17.164 Latency(us) 00:29:17.164 Device Information : IOPS MiB/s Average min max 00:29:17.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1570.39 196.30 20400.39 1396.13 49906.58 00:29:17.164 ======================================================== 00:29:17.164 Total : 1570.39 196.30 20400.39 1396.13 49906.58 00:29:17.164 00:29:17.164 23:42:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:17.164 23:42:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:17.164 23:42:37 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:17.164 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.116 Initializing NVMe Controllers 00:29:27.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.116 Controller IO queue size 128, less than required. 00:29:27.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:27.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:27.116 Initialization complete. Launching workers. 
00:29:27.116 ======================================================== 00:29:27.116 Latency(us) 00:29:27.116 Device Information : IOPS MiB/s Average min max 00:29:27.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12071.85 5.89 10606.86 1685.46 26693.77 00:29:27.116 ======================================================== 00:29:27.116 Total : 12071.85 5.89 10606.86 1685.46 26693.77 00:29:27.116 00:29:27.116 23:42:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:27.116 23:42:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:27.116 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.075 Initializing NVMe Controllers 00:29:37.075 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.075 Controller IO queue size 128, less than required. 00:29:37.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:37.075 Initialization complete. Launching workers. 00:29:37.075 ======================================================== 00:29:37.075 Latency(us) 00:29:37.075 Device Information : IOPS MiB/s Average min max 00:29:37.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1212.18 151.52 106793.54 15267.01 225131.96 00:29:37.075 ======================================================== 00:29:37.075 Total : 1212.18 151.52 106793.54 15267.01 225131.96 00:29:37.075 00:29:37.075 23:42:57 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.332 23:42:58 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 824d827e-9aa6-426c-b513-dd3aba1c8668 00:29:38.315 23:42:58 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:38.573 23:42:59 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40dc506e-cd06-4ecb-827b-efdec6647865 00:29:39.138 23:42:59 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:39.396 23:43:00 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:39.396 23:43:00 -- host/perf.sh@114 -- # nvmftestfini 00:29:39.396 23:43:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:39.396 23:43:00 -- nvmf/common.sh@116 -- # sync 00:29:39.396 23:43:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:39.396 23:43:00 -- nvmf/common.sh@119 -- # set +e 00:29:39.396 23:43:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:39.396 23:43:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:39.396 rmmod nvme_tcp 00:29:39.396 rmmod nvme_fabrics 00:29:39.396 rmmod nvme_keyring 00:29:39.396 23:43:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:39.396 23:43:00 -- nvmf/common.sh@123 -- # set -e 00:29:39.396 23:43:00 -- nvmf/common.sh@124 -- # return 0 00:29:39.396 23:43:00 -- nvmf/common.sh@477 -- # '[' -n 346043 ']' 00:29:39.396 23:43:00 -- nvmf/common.sh@478 -- # killprocess 346043 00:29:39.396 23:43:00 -- common/autotest_common.sh@926 -- # '[' -z 346043 ']' 00:29:39.396 23:43:00 -- common/autotest_common.sh@930 -- # kill 
-0 346043 00:29:39.396 23:43:00 -- common/autotest_common.sh@931 -- # uname 00:29:39.396 23:43:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:39.396 23:43:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 346043 00:29:39.396 23:43:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:39.396 23:43:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:39.396 23:43:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 346043' 00:29:39.396 killing process with pid 346043 00:29:39.396 23:43:00 -- common/autotest_common.sh@945 -- # kill 346043 00:29:39.396 23:43:00 -- common/autotest_common.sh@950 -- # wait 346043 00:29:41.295 23:43:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:41.295 23:43:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:41.295 23:43:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:41.295 23:43:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:41.295 23:43:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:41.295 23:43:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.295 23:43:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.295 23:43:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.202 23:43:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:43.202 00:29:43.202 real 1m35.180s 00:29:43.202 user 5m53.229s 00:29:43.202 sys 0m17.551s 00:29:43.202 23:43:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.202 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:29:43.202 ************************************ 00:29:43.202 END TEST nvmf_perf 00:29:43.202 ************************************ 00:29:43.202 23:43:03 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:43.202 23:43:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:43.202 23:43:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.202 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:29:43.202 ************************************ 00:29:43.202 START TEST nvmf_fio_host 00:29:43.202 ************************************ 00:29:43.202 23:43:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:43.202 * Looking for test storage... 
00:29:43.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.202 23:43:03 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.202 23:43:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.202 23:43:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.202 23:43:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.203 23:43:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- paths/export.sh@5 -- # export PATH 00:29:43.203 23:43:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.203 23:43:03 -- nvmf/common.sh@7 -- # uname -s 00:29:43.203 23:43:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.203 23:43:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.203 23:43:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.203 23:43:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.203 23:43:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.203 23:43:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.203 23:43:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.203 23:43:03 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.203 23:43:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.203 23:43:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.203 23:43:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:43.203 23:43:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:43.203 23:43:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.203 23:43:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.203 23:43:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.203 23:43:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.203 23:43:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.203 23:43:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.203 23:43:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.203 23:43:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- paths/export.sh@5 -- # export PATH 00:29:43.203 23:43:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.203 23:43:03 -- nvmf/common.sh@46 -- # : 0 00:29:43.203 23:43:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:43.203 23:43:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:43.203 23:43:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:43.203 23:43:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.203 23:43:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.203 23:43:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:43.203 23:43:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:43.203 23:43:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:43.203 23:43:03 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.203 23:43:03 -- host/fio.sh@14 -- # nvmftestinit 00:29:43.203 23:43:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:43.203 23:43:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.203 23:43:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:43.203 23:43:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:43.203 23:43:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:43.203 23:43:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.203 23:43:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.203 23:43:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.203 23:43:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:43.203 23:43:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:43.203 23:43:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:43.203 23:43:03 -- common/autotest_common.sh@10 -- # set +x 00:29:45.740 23:43:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:45.740 23:43:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:45.740 23:43:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:45.740 23:43:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:45.740 23:43:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:45.740 23:43:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:45.740 23:43:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:45.740 23:43:06 -- nvmf/common.sh@294 -- # net_devs=() 00:29:45.740 23:43:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:45.740 23:43:06 -- nvmf/common.sh@295 -- # e810=() 00:29:45.740 23:43:06 -- nvmf/common.sh@295 -- # local -ga e810 00:29:45.740 23:43:06 -- nvmf/common.sh@296 -- # x722=() 00:29:45.740 23:43:06 -- nvmf/common.sh@296 -- # local -ga x722 00:29:45.740 23:43:06 -- nvmf/common.sh@297 -- # mlx=() 00:29:45.740 23:43:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:45.740 23:43:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.740 23:43:06 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.740 23:43:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:45.740 23:43:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:45.740 23:43:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:45.740 23:43:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.740 23:43:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:45.740 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:45.740 23:43:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.740 23:43:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:45.740 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:45.740 23:43:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:45.740 23:43:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.740 23:43:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.740 23:43:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.740 23:43:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.740 23:43:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:45.740 Found net devices under 0000:84:00.0: cvl_0_0 00:29:45.740 23:43:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.740 23:43:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.740 23:43:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.740 23:43:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.740 23:43:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.740 23:43:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:45.740 Found net devices under 0000:84:00.1: cvl_0_1 
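(Before the fio host test can attach, the trace below rebuilds the same single-host plumbing used in the perf stage: one port of the NIC is moved into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over real hardware on one machine. A condensed sketch of the commands visible in the following trace — interface names and addresses are the ones this rig reports, not generic defaults:)

# Target side gets its own namespace; the initiator stays in the root one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Addressing: initiator on cvl_0_1, target on cvl_0_0 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
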
00:29:45.740 23:43:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.740 23:43:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:45.740 23:43:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:45.740 23:43:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:45.740 23:43:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:45.740 23:43:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.740 23:43:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.740 23:43:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.740 23:43:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:45.740 23:43:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.740 23:43:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.740 23:43:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:45.740 23:43:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.741 23:43:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.741 23:43:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:45.741 23:43:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:45.741 23:43:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.741 23:43:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.741 23:43:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.741 23:43:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.741 23:43:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:45.741 23:43:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.741 23:43:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.741 23:43:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.741 23:43:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:45.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:29:45.741 00:29:45.741 --- 10.0.0.2 ping statistics --- 00:29:45.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.741 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:29:45.741 23:43:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:29:45.741 00:29:45.741 --- 10.0.0.1 ping statistics --- 00:29:45.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.741 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:45.741 23:43:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.741 23:43:06 -- nvmf/common.sh@410 -- # return 0 00:29:45.741 23:43:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:45.741 23:43:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.741 23:43:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:45.741 23:43:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:45.741 23:43:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.741 23:43:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:45.741 23:43:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:45.741 23:43:06 -- host/fio.sh@16 -- # [[ y != y ]] 00:29:45.741 23:43:06 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:45.741 23:43:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:45.741 23:43:06 -- common/autotest_common.sh@10 -- # set +x 00:29:46.000 23:43:06 -- host/fio.sh@24 -- # nvmfpid=359379 00:29:46.000 23:43:06 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:46.000 23:43:06 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.000 23:43:06 -- host/fio.sh@28 -- # waitforlisten 359379 00:29:46.000 23:43:06 -- common/autotest_common.sh@819 -- # '[' -z 359379 ']' 00:29:46.000 23:43:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.000 23:43:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:46.000 23:43:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.000 23:43:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:46.000 23:43:06 -- common/autotest_common.sh@10 -- # set +x 00:29:46.000 [2024-07-11 23:43:06.739438] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:46.000 [2024-07-11 23:43:06.739530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.000 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.000 [2024-07-11 23:43:06.819637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.000 [2024-07-11 23:43:06.913348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:46.000 [2024-07-11 23:43:06.913495] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.000 [2024-07-11 23:43:06.913512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.000 [2024-07-11 23:43:06.913524] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
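Condensed, the nvmf_tcp_init block above turns one dual-port NIC into a complete target/initiator test network: port cvl_0_0 moves into a private namespace as the target at 10.0.0.2, port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP on port 4420, and the two pings prove the path works in both directions. The same steps as a plain shell sketch, with every name and address taken from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port, isolated
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns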
00:29:46.000 [2024-07-11 23:43:06.913581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.000 [2024-07-11 23:43:06.913605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.000 [2024-07-11 23:43:06.913666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.000 [2024-07-11 23:43:06.913669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.371 23:43:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:47.371 23:43:08 -- common/autotest_common.sh@852 -- # return 0 00:29:47.371 23:43:08 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:47.628 [2024-07-11 23:43:08.371791] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.628 23:43:08 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:47.628 23:43:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:47.628 23:43:08 -- common/autotest_common.sh@10 -- # set +x 00:29:47.628 23:43:08 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:48.194 Malloc1 00:29:48.194 23:43:09 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.758 23:43:09 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:49.016 23:43:09 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.274 [2024-07-11 23:43:10.050979] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.274 23:43:10 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.532 23:43:10 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:49.532 23:43:10 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.532 23:43:10 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.532 23:43:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:49.532 23:43:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:49.532 23:43:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:49.532 23:43:10 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.532 23:43:10 -- common/autotest_common.sh@1320 -- # shift 00:29:49.532 23:43:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:49.532 23:43:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:49.532 23:43:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:49.532 23:43:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:49.532 23:43:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:49.532 23:43:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:49.532 23:43:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:49.532 23:43:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.789 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:49.789 fio-3.35 00:29:49.789 Starting 1 thread 00:29:49.789 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.316 00:29:52.316 test: (groupid=0, jobs=1): err= 0: pid=360011: Thu Jul 11 23:43:12 2024 00:29:52.316 read: IOPS=9537, BW=37.3MiB/s (39.1MB/s)(74.7MiB/2006msec) 00:29:52.316 slat (usec): min=2, max=164, avg= 3.06, stdev= 2.14 00:29:52.316 clat (usec): min=3187, max=12638, avg=7395.47, stdev=547.63 00:29:52.316 lat (usec): min=3207, max=12641, avg=7398.53, stdev=547.55 00:29:52.316 clat percentiles (usec): 00:29:52.316 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:29:52.316 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:29:52.316 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8291], 00:29:52.316 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[10421], 99.95th=[11863], 00:29:52.316 | 99.99th=[12387] 00:29:52.316 bw ( KiB/s): min=37080, max=38848, per=99.95%, avg=38134.00, stdev=748.26, samples=4 00:29:52.316 iops : min= 9270, max= 9712, avg=9533.50, stdev=187.06, samples=4 00:29:52.316 write: IOPS=9547, BW=37.3MiB/s (39.1MB/s)(74.8MiB/2006msec); 0 zone resets 00:29:52.316 slat (usec): min=2, max=117, avg= 3.27, stdev= 1.87 00:29:52.316 clat (usec): min=1268, max=11911, avg=5968.92, stdev=480.15 00:29:52.316 lat (usec): min=1276, max=11914, avg=5972.19, stdev=480.10 00:29:52.316 clat percentiles (usec): 00:29:52.316 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:29:52.316 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:29:52.316 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:29:52.316 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8979], 99.95th=[10814], 00:29:52.316 | 99.99th=[11863] 00:29:52.316 bw ( KiB/s): min=37904, max=38520, per=99.98%, avg=38184.00, stdev=296.40, samples=4 00:29:52.316 iops : min= 9476, max= 9630, avg=9546.00, stdev=74.10, samples=4 00:29:52.316 lat (msec) : 2=0.01%, 4=0.12%, 10=99.76%, 20=0.11% 00:29:52.316 cpu : usr=61.60%, sys=32.67%, ctx=39, majf=0, minf=5 00:29:52.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:52.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.317 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:52.317 issued rwts: total=19133,19153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:52.317 00:29:52.317 Run status group 0 (all jobs): 00:29:52.317 READ: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.7MiB (78.4MB), run=2006-2006msec 00:29:52.317 WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.8MiB (78.5MB), run=2006-2006msec 00:29:52.317 23:43:12 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.317 23:43:12 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.317 23:43:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:52.317 23:43:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.317 23:43:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:52.317 23:43:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.317 23:43:12 -- common/autotest_common.sh@1320 -- # shift 00:29:52.317 23:43:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:52.317 23:43:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:52.317 23:43:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:52.317 23:43:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:52.317 23:43:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:52.317 23:43:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:52.317 23:43:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:52.317 23:43:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.317 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:52.317 fio-3.35 00:29:52.317 Starting 1 thread 00:29:52.317 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.844 00:29:54.844 test: (groupid=0, jobs=1): err= 0: pid=360352: Thu Jul 11 23:43:15 2024 00:29:54.844 read: IOPS=6973, BW=109MiB/s (114MB/s)(219MiB/2007msec) 00:29:54.844 slat (usec): min=3, max=264, avg= 5.56, stdev= 3.70 00:29:54.844 clat (usec): min=2173, max=27068, avg=11097.38, 
stdev=2960.27 00:29:54.844 lat (usec): min=2178, max=27076, avg=11102.94, stdev=2961.02 00:29:54.844 clat percentiles (usec): 00:29:54.844 | 1.00th=[ 5866], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 8586], 00:29:54.844 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10814], 60.00th=[11600], 00:29:54.844 | 70.00th=[12387], 80.00th=[13304], 90.00th=[15008], 95.00th=[15926], 00:29:54.844 | 99.00th=[20055], 99.50th=[22938], 99.90th=[26346], 99.95th=[26608], 00:29:54.844 | 99.99th=[27132] 00:29:54.844 bw ( KiB/s): min=36608, max=68352, per=50.09%, avg=55888.00, stdev=14612.96, samples=4 00:29:54.844 iops : min= 2288, max= 4272, avg=3493.00, stdev=913.31, samples=4 00:29:54.844 write: IOPS=4182, BW=65.3MiB/s (68.5MB/s)(114MiB/1750msec); 0 zone resets 00:29:54.844 slat (usec): min=39, max=348, avg=49.03, stdev=13.44 00:29:54.844 clat (usec): min=5538, max=20111, avg=12480.31, stdev=1851.06 00:29:54.844 lat (usec): min=5585, max=20164, avg=12529.34, stdev=1853.15 00:29:54.844 clat percentiles (usec): 00:29:54.844 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10814], 00:29:54.844 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:29:54.844 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14877], 95.00th=[15533], 00:29:54.844 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19792], 99.95th=[20055], 00:29:54.844 | 99.99th=[20055] 00:29:54.844 bw ( KiB/s): min=39168, max=70240, per=86.87%, avg=58128.00, stdev=14765.62, samples=4 00:29:54.844 iops : min= 2448, max= 4390, avg=3633.00, stdev=922.85, samples=4 00:29:54.844 lat (msec) : 4=0.10%, 10=28.46%, 20=70.76%, 50=0.69% 00:29:54.844 cpu : usr=82.90%, sys=14.91%, ctx=59, majf=0, minf=1 00:29:54.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:54.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:54.844 issued rwts: total=13995,7319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:54.844 00:29:54.844 Run status group 0 (all jobs): 00:29:54.844 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=219MiB (229MB), run=2007-2007msec 00:29:54.844 WRITE: bw=65.3MiB/s (68.5MB/s), 65.3MiB/s-65.3MiB/s (68.5MB/s-68.5MB/s), io=114MiB (120MB), run=1750-1750msec 00:29:54.844 23:43:15 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.408 23:43:16 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:55.408 23:43:16 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:55.408 23:43:16 -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:55.408 23:43:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:55.408 23:43:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:55.408 23:43:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:55.408 23:43:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:55.408 23:43:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:55.408 23:43:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:55.408 23:43:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:29:55.408 23:43:16 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 
0000:82:00.0 -i 10.0.0.2 00:29:58.683 Nvme0n1 00:29:58.683 23:43:19 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:01.964 23:43:22 -- host/fio.sh@53 -- # ls_guid=96b505a2-3db4-4ea5-a3cc-4d8763c8770d 00:30:01.964 23:43:22 -- host/fio.sh@54 -- # get_lvs_free_mb 96b505a2-3db4-4ea5-a3cc-4d8763c8770d 00:30:01.964 23:43:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=96b505a2-3db4-4ea5-a3cc-4d8763c8770d 00:30:01.964 23:43:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:01.964 23:43:22 -- common/autotest_common.sh@1345 -- # local fc 00:30:01.964 23:43:22 -- common/autotest_common.sh@1346 -- # local cs 00:30:01.964 23:43:22 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:01.964 23:43:22 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:01.964 { 00:30:01.964 "uuid": "96b505a2-3db4-4ea5-a3cc-4d8763c8770d", 00:30:01.964 "name": "lvs_0", 00:30:01.964 "base_bdev": "Nvme0n1", 00:30:01.964 "total_data_clusters": 930, 00:30:01.964 "free_clusters": 930, 00:30:01.964 "block_size": 512, 00:30:01.964 "cluster_size": 1073741824 00:30:01.964 } 00:30:01.964 ]' 00:30:01.964 23:43:22 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="96b505a2-3db4-4ea5-a3cc-4d8763c8770d") .free_clusters' 00:30:01.964 23:43:22 -- common/autotest_common.sh@1348 -- # fc=930 00:30:01.964 23:43:22 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="96b505a2-3db4-4ea5-a3cc-4d8763c8770d") .cluster_size' 00:30:02.224 23:43:22 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:30:02.225 23:43:22 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:30:02.225 23:43:22 -- common/autotest_common.sh@1353 -- # echo 952320 00:30:02.225 952320 00:30:02.225 23:43:22 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:02.787 24b87c92-8ae7-44c3-bf6f-fc88e96b0d3b 00:30:02.787 23:43:23 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:03.352 23:43:24 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:03.917 23:43:24 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:04.174 23:43:25 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.174 23:43:25 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.174 23:43:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:04.174 23:43:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.174 23:43:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:04.174 23:43:25 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.174 
23:43:25 -- common/autotest_common.sh@1320 -- # shift 00:30:04.174 23:43:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:04.174 23:43:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.174 23:43:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.174 23:43:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:04.174 23:43:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:04.432 23:43:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:04.432 23:43:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:04.432 23:43:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.432 23:43:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.432 23:43:25 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:04.432 23:43:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:04.432 23:43:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:04.432 23:43:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:04.432 23:43:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:04.432 23:43:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.432 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:04.432 fio-3.35 00:30:04.432 Starting 1 thread 00:30:04.690 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.215 00:30:07.215 test: (groupid=0, jobs=1): err= 0: pid=361827: Thu Jul 11 23:43:27 2024 00:30:07.215 read: IOPS=6504, BW=25.4MiB/s (26.6MB/s)(51.0MiB/2007msec) 00:30:07.215 slat (usec): min=2, max=156, avg= 2.97, stdev= 2.08 00:30:07.215 clat (usec): min=867, max=170914, avg=10806.48, stdev=11233.83 00:30:07.215 lat (usec): min=870, max=170950, avg=10809.45, stdev=11234.11 00:30:07.215 clat percentiles (msec): 00:30:07.215 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:30:07.215 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:30:07.215 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:30:07.215 | 99.00th=[ 12], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:07.215 | 99.99th=[ 171] 00:30:07.215 bw ( KiB/s): min=18328, max=28696, per=99.80%, avg=25968.00, stdev=5099.50, samples=4 00:30:07.215 iops : min= 4582, max= 7174, avg=6492.00, stdev=1274.88, samples=4 00:30:07.215 write: IOPS=6512, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2007msec); 0 zone resets 00:30:07.215 slat (usec): min=2, max=240, avg= 3.11, stdev= 2.39 00:30:07.215 clat (usec): min=365, max=168960, avg=8696.06, stdev=10525.82 00:30:07.215 lat (usec): min=368, max=168966, avg=8699.17, stdev=10526.22 00:30:07.215 clat percentiles (msec): 00:30:07.215 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:30:07.215 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:30:07.215 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:30:07.215 | 99.00th=[ 10], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:30:07.215 | 99.99th=[ 169] 00:30:07.215 bw ( KiB/s): min=19304, max=28360, per=99.92%, avg=26028.00, stdev=4483.43, samples=4 00:30:07.215 iops : min= 4826, max= 7090, 
avg=6507.00, stdev=1120.86, samples=4 00:30:07.215 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:07.215 lat (msec) : 2=0.03%, 4=0.17%, 10=74.11%, 20=25.18%, 250=0.49% 00:30:07.215 cpu : usr=56.63%, sys=39.53%, ctx=57, majf=0, minf=5 00:30:07.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:07.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:07.215 issued rwts: total=13055,13070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:07.215 00:30:07.215 Run status group 0 (all jobs): 00:30:07.215 READ: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=51.0MiB (53.5MB), run=2007-2007msec 00:30:07.215 WRITE: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.5MB), run=2007-2007msec 00:30:07.215 23:43:27 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:07.215 23:43:28 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:08.591 23:43:29 -- host/fio.sh@64 -- # ls_nested_guid=56be2d99-1d5e-4525-93a3-e6bd033bf036 00:30:08.591 23:43:29 -- host/fio.sh@65 -- # get_lvs_free_mb 56be2d99-1d5e-4525-93a3-e6bd033bf036 00:30:08.591 23:43:29 -- common/autotest_common.sh@1343 -- # local lvs_uuid=56be2d99-1d5e-4525-93a3-e6bd033bf036 00:30:08.591 23:43:29 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:08.591 23:43:29 -- common/autotest_common.sh@1345 -- # local fc 00:30:08.591 23:43:29 -- common/autotest_common.sh@1346 -- # local cs 00:30:08.591 23:43:29 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:08.848 23:43:29 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:08.848 { 00:30:08.848 "uuid": "96b505a2-3db4-4ea5-a3cc-4d8763c8770d", 00:30:08.848 "name": "lvs_0", 00:30:08.848 "base_bdev": "Nvme0n1", 00:30:08.848 "total_data_clusters": 930, 00:30:08.848 "free_clusters": 0, 00:30:08.848 "block_size": 512, 00:30:08.848 "cluster_size": 1073741824 00:30:08.848 }, 00:30:08.848 { 00:30:08.848 "uuid": "56be2d99-1d5e-4525-93a3-e6bd033bf036", 00:30:08.848 "name": "lvs_n_0", 00:30:08.848 "base_bdev": "24b87c92-8ae7-44c3-bf6f-fc88e96b0d3b", 00:30:08.848 "total_data_clusters": 237847, 00:30:08.848 "free_clusters": 237847, 00:30:08.848 "block_size": 512, 00:30:08.848 "cluster_size": 4194304 00:30:08.849 } 00:30:08.849 ]' 00:30:08.849 23:43:29 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="56be2d99-1d5e-4525-93a3-e6bd033bf036") .free_clusters' 00:30:08.849 23:43:29 -- common/autotest_common.sh@1348 -- # fc=237847 00:30:08.849 23:43:29 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="56be2d99-1d5e-4525-93a3-e6bd033bf036") .cluster_size' 00:30:08.849 23:43:29 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:08.849 23:43:29 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:30:08.849 23:43:29 -- common/autotest_common.sh@1353 -- # echo 951388 00:30:08.849 951388 00:30:08.849 23:43:29 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:09.416 9adeebf3-7a28-4cab-988f-2b0753d1a0f8 00:30:09.675 23:43:30 -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:10.242 23:43:30 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:10.501 23:43:31 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:10.758 23:43:31 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.758 23:43:31 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.758 23:43:31 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:10.758 23:43:31 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.758 23:43:31 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:10.758 23:43:31 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.758 23:43:31 -- common/autotest_common.sh@1320 -- # shift 00:30:10.758 23:43:31 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:10.758 23:43:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:10.758 23:43:31 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:10.758 23:43:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:10.758 23:43:31 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:10.758 23:43:31 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:10.758 23:43:31 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:10.758 23:43:31 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.016 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:11.016 fio-3.35 00:30:11.016 Starting 1 thread 00:30:11.016 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.544 00:30:13.544 test: (groupid=0, jobs=1): err= 0: pid=362687: Thu Jul 11 23:43:34 2024 00:30:13.544 read: IOPS=6216, BW=24.3MiB/s (25.5MB/s)(48.8MiB/2009msec) 00:30:13.544 slat (usec): min=2, max=234, avg= 3.25, stdev= 2.65 00:30:13.544 clat (usec): min=4658, max=19057, 
avg=11378.54, stdev=928.42 00:30:13.544 lat (usec): min=4683, max=19060, avg=11381.79, stdev=928.28 00:30:13.544 clat percentiles (usec): 00:30:13.544 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:30:13.544 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:30:13.544 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:30:13.544 | 99.00th=[13435], 99.50th=[13698], 99.90th=[16057], 99.95th=[17695], 00:30:13.544 | 99.99th=[19006] 00:30:13.544 bw ( KiB/s): min=23496, max=25352, per=99.95%, avg=24852.00, stdev=905.01, samples=4 00:30:13.544 iops : min= 5874, max= 6338, avg=6213.00, stdev=226.25, samples=4 00:30:13.544 write: IOPS=6210, BW=24.3MiB/s (25.4MB/s)(48.7MiB/2009msec); 0 zone resets 00:30:13.544 slat (usec): min=2, max=115, avg= 3.35, stdev= 1.55 00:30:13.544 clat (usec): min=2496, max=17795, avg=9092.10, stdev=858.09 00:30:13.544 lat (usec): min=2503, max=17799, avg=9095.45, stdev=858.06 00:30:13.544 clat percentiles (usec): 00:30:13.544 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:30:13.544 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:30:13.545 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:30:13.545 | 99.00th=[10945], 99.50th=[11338], 99.90th=[16057], 99.95th=[17433], 00:30:13.545 | 99.99th=[17695] 00:30:13.545 bw ( KiB/s): min=24648, max=24960, per=99.93%, avg=24822.00, stdev=136.92, samples=4 00:30:13.545 iops : min= 6162, max= 6240, avg=6205.50, stdev=34.23, samples=4 00:30:13.545 lat (msec) : 4=0.04%, 10=46.84%, 20=53.12% 00:30:13.545 cpu : usr=58.22%, sys=36.90%, ctx=52, majf=0, minf=5 00:30:13.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:13.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.545 issued rwts: total=12488,12476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.545 00:30:13.545 Run status group 0 (all jobs): 00:30:13.545 READ: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.8MiB (51.1MB), run=2009-2009msec 00:30:13.545 WRITE: bw=24.3MiB/s (25.4MB/s), 24.3MiB/s-24.3MiB/s (25.4MB/s-25.4MB/s), io=48.7MiB (51.1MB), run=2009-2009msec 00:30:13.545 23:43:34 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:13.803 23:43:34 -- host/fio.sh@74 -- # sync 00:30:13.803 23:43:34 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:17.990 23:43:38 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:17.990 23:43:38 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:21.277 23:43:41 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:21.535 23:43:42 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:23.434 23:43:44 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:23.434 23:43:44 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:23.434 23:43:44 -- host/fio.sh@86 -- # nvmftestfini 00:30:23.434 23:43:44 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:30:23.434 23:43:44 -- nvmf/common.sh@116 -- # sync 00:30:23.434 23:43:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:23.434 23:43:44 -- nvmf/common.sh@119 -- # set +e 00:30:23.434 23:43:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:23.434 23:43:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:23.434 rmmod nvme_tcp 00:30:23.434 rmmod nvme_fabrics 00:30:23.692 rmmod nvme_keyring 00:30:23.692 23:43:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:23.692 23:43:44 -- nvmf/common.sh@123 -- # set -e 00:30:23.692 23:43:44 -- nvmf/common.sh@124 -- # return 0 00:30:23.692 23:43:44 -- nvmf/common.sh@477 -- # '[' -n 359379 ']' 00:30:23.692 23:43:44 -- nvmf/common.sh@478 -- # killprocess 359379 00:30:23.692 23:43:44 -- common/autotest_common.sh@926 -- # '[' -z 359379 ']' 00:30:23.692 23:43:44 -- common/autotest_common.sh@930 -- # kill -0 359379 00:30:23.692 23:43:44 -- common/autotest_common.sh@931 -- # uname 00:30:23.692 23:43:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:23.692 23:43:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 359379 00:30:23.692 23:43:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:23.692 23:43:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:23.692 23:43:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 359379' 00:30:23.692 killing process with pid 359379 00:30:23.692 23:43:44 -- common/autotest_common.sh@945 -- # kill 359379 00:30:23.692 23:43:44 -- common/autotest_common.sh@950 -- # wait 359379 00:30:23.950 23:43:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:23.950 23:43:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:23.950 23:43:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:23.950 23:43:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.950 23:43:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:23.950 23:43:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.950 23:43:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.950 23:43:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.881 23:43:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:25.881 00:30:25.881 real 0m42.840s 00:30:25.881 user 2m46.764s 00:30:25.881 sys 0m8.004s 00:30:25.881 23:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.881 23:43:46 -- common/autotest_common.sh@10 -- # set +x 00:30:25.881 ************************************ 00:30:25.881 END TEST nvmf_fio_host 00:30:25.881 ************************************ 00:30:25.881 23:43:46 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:25.881 23:43:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:25.881 23:43:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:25.881 23:43:46 -- common/autotest_common.sh@10 -- # set +x 00:30:25.881 ************************************ 00:30:25.881 START TEST nvmf_failover 00:30:25.881 ************************************ 00:30:25.881 23:43:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:25.881 * Looking for test storage... 
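The two get_lvs_free_mb results in the fio host run that just finished are plain cluster arithmetic: free_mb = free_clusters * cluster_size / 1 MiB. For lvs_0 that is 930 clusters * 1073741824 B = 952320 MiB; for the nested lvs_n_0 it is 237847 clusters * 4194304 B = 951388 MiB, and the ~932 MiB gap is presumably the nested store's metadata plus cluster rounding. A sketch of the same computation against the bdev_lvol_get_lvstores output shown earlier (selecting by name rather than by uuid, for brevity):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
fc=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .free_clusters')
cs=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .cluster_size')
echo $(( fc * cs / 1024 / 1024 ))    # 237847 * 4194304 / 1048576 = 951388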
00:30:25.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:25.881 23:43:46 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.881 23:43:46 -- nvmf/common.sh@7 -- # uname -s 00:30:25.881 23:43:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.881 23:43:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.881 23:43:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.881 23:43:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.881 23:43:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.881 23:43:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.881 23:43:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.881 23:43:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.881 23:43:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.881 23:43:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.881 23:43:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:25.881 23:43:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:25.881 23:43:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.881 23:43:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.881 23:43:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.881 23:43:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.881 23:43:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.881 23:43:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.881 23:43:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.881 23:43:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.881 23:43:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.881 23:43:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.881 23:43:46 -- paths/export.sh@5 -- # export PATH 00:30:25.882 23:43:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.882 23:43:46 -- nvmf/common.sh@46 -- # : 0 00:30:25.882 23:43:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:25.882 23:43:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:25.882 23:43:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:25.882 23:43:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.882 23:43:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.882 23:43:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:25.882 23:43:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:25.882 23:43:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:26.141 23:43:46 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.141 23:43:46 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.141 23:43:46 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:26.141 23:43:46 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:26.141 23:43:46 -- host/failover.sh@18 -- # nvmftestinit 00:30:26.141 23:43:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:26.141 23:43:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.141 23:43:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:26.141 23:43:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:26.141 23:43:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:26.141 23:43:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.141 23:43:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.141 23:43:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.141 23:43:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:26.141 23:43:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:26.141 23:43:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:26.141 23:43:46 -- common/autotest_common.sh@10 -- # set +x 00:30:28.674 23:43:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:28.674 23:43:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:28.674 23:43:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:28.674 23:43:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:28.674 23:43:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:28.674 23:43:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:28.674 23:43:49 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:30:28.674 23:43:49 -- nvmf/common.sh@294 -- # net_devs=() 00:30:28.674 23:43:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:28.674 23:43:49 -- nvmf/common.sh@295 -- # e810=() 00:30:28.674 23:43:49 -- nvmf/common.sh@295 -- # local -ga e810 00:30:28.674 23:43:49 -- nvmf/common.sh@296 -- # x722=() 00:30:28.674 23:43:49 -- nvmf/common.sh@296 -- # local -ga x722 00:30:28.674 23:43:49 -- nvmf/common.sh@297 -- # mlx=() 00:30:28.674 23:43:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:28.674 23:43:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.674 23:43:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:28.674 23:43:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:28.674 23:43:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:28.674 23:43:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.674 23:43:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:28.674 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:28.674 23:43:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.674 23:43:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:28.674 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:28.674 23:43:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:28.674 23:43:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:28.675 23:43:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.675 23:43:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.675 23:43:49 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:30:28.675 23:43:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.675 23:43:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:28.675 Found net devices under 0000:84:00.0: cvl_0_0 00:30:28.675 23:43:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.675 23:43:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.675 23:43:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.675 23:43:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:28.675 23:43:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.675 23:43:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:28.675 Found net devices under 0000:84:00.1: cvl_0_1 00:30:28.675 23:43:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.675 23:43:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:28.675 23:43:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:28.675 23:43:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:28.675 23:43:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:28.675 23:43:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.675 23:43:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.675 23:43:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.675 23:43:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:28.675 23:43:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.675 23:43:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.675 23:43:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:28.675 23:43:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.675 23:43:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.675 23:43:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:28.675 23:43:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:28.675 23:43:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.675 23:43:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.932 23:43:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.933 23:43:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.933 23:43:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:28.933 23:43:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.933 23:43:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.933 23:43:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.933 23:43:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:28.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:30:28.933 00:30:28.933 --- 10.0.0.2 ping statistics --- 00:30:28.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.933 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:28.933 23:43:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:30:28.933 00:30:28.933 --- 10.0.0.1 ping statistics --- 00:30:28.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.933 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:30:28.933 23:43:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.933 23:43:49 -- nvmf/common.sh@410 -- # return 0 00:30:28.933 23:43:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:28.933 23:43:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.933 23:43:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:28.933 23:43:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:28.933 23:43:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.933 23:43:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:28.933 23:43:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:28.933 23:43:49 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:28.933 23:43:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:28.933 23:43:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:28.933 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:30:28.933 23:43:49 -- nvmf/common.sh@469 -- # nvmfpid=366171 00:30:28.933 23:43:49 -- nvmf/common.sh@470 -- # waitforlisten 366171 00:30:28.933 23:43:49 -- common/autotest_common.sh@819 -- # '[' -z 366171 ']' 00:30:28.933 23:43:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.933 23:43:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:28.933 23:43:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:28.933 23:43:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.933 23:43:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:28.933 23:43:49 -- common/autotest_common.sh@10 -- # set +x 00:30:28.933 [2024-07-11 23:43:49.818796] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:28.933 [2024-07-11 23:43:49.818902] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.933 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.190 [2024-07-11 23:43:49.904775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:29.190 [2024-07-11 23:43:50.008615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:29.190 [2024-07-11 23:43:50.008788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.190 [2024-07-11 23:43:50.008808] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.190 [2024-07-11 23:43:50.008832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
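The failover host test starting here drives exactly the multipath scenario its name implies: the target advertises one subsystem on three TCP ports, bdevperf attaches the same controller over more than one of them, and the test then removes listeners to force path switches (the repeated tqpair recv-state messages further down are the target tearing those connections back down). Distilled from the rpc.py calls that follow, with subsystem name, address, and ports as in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# host side: two paths to the same controller, then drop the active listener
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420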
00:30:29.190 [2024-07-11 23:43:50.008985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:29.192 [2024-07-11 23:43:50.009030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:30:29.192 [2024-07-11 23:43:50.009033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:30.138 23:43:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:30.138 23:43:50 -- common/autotest_common.sh@852 -- # return 0
00:30:30.138 23:43:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:30.138 23:43:50 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:30.138 23:43:50 -- common/autotest_common.sh@10 -- # set +x
00:30:30.138 23:43:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:30.138 23:43:50 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:30.395 [2024-07-11 23:43:51.336034] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:30.651 23:43:51 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:30:30.909 Malloc0
00:30:30.909 23:43:51 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:31.475 23:43:52 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:31.733 23:43:52 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:31.990 [2024-07-11 23:43:52.778348] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:31.990 23:43:52 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:32.249 [2024-07-11 23:43:53.099447] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:32.249 23:43:53 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:32.817 [2024-07-11 23:43:53.564978] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:32.817 23:43:53 -- host/failover.sh@31 -- # bdevperf_pid=366696
00:30:32.817 23:43:53 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:30:32.818 23:43:53 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:32.818 23:43:53 -- host/failover.sh@34 -- # waitforlisten 366696 /var/tmp/bdevperf.sock
00:30:32.818 23:43:53 -- common/autotest_common.sh@819 -- # '[' -z 366696 ']'
00:30:32.818 23:43:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:32.818 23:43:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:32.818 23:43:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/bdevperf.sock...'
00:30:32.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:32.818 23:43:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:32.818 23:43:53 -- common/autotest_common.sh@10 -- # set +x
00:30:33.075 23:43:53 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:33.075 23:43:53 -- common/autotest_common.sh@852 -- # return 0
00:30:33.075 23:43:53 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:33.639 NVMe0n1
00:30:33.639 23:43:54 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:34.204 00
00:30:34.461 23:43:55 -- host/failover.sh@39 -- # run_test_pid=366840
00:30:34.461 23:43:55 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:34.461 23:43:55 -- host/failover.sh@41 -- # sleep 1
00:30:35.395 23:43:56 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:35.653 [2024-07-11 23:43:56.545235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100af50 is same with the state(5) to be set
[... the same tcp.c:1574 message repeated ~50 more times for tqpair=0x100af50; duplicates elided ...]
00:30:35.653 23:43:56 -- host/failover.sh@45 -- # sleep 3
00:30:38.936 23:43:59 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:39.197 
00:30:39.197 23:44:00 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:39.763 [2024-07-11 23:44:00.453974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100b760 is same with the state(5) to be set
[... the same tcp.c:1574 message repeated ~35 more times for tqpair=0x100b760; duplicates elided ...]
00:30:39.764 23:44:00 -- host/failover.sh@50 -- # sleep 3
00:30:43.041 23:44:03 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:43.041 [2024-07-11 23:44:03.804509] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:43.041 23:44:03 -- host/failover.sh@55 -- # sleep 1
00:30:43.980 23:44:04 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:44.239 [2024-07-11 23:44:05.091236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4ee0 is same with the state(5) to be set
[... the same tcp.c:1574 message repeated ~45 more times for tqpair=0xdb4ee0; duplicates elided ...]
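
Taken together, the rpc.py traffic above (failover.sh lines 22-28 for provisioning, 43-57 for the drill) condenses to the sketch below. It is a reconstruction from the traced commands; rpc.py is assumed to run from the SPDK repo root and to reach the target on its default /var/tmp/spdk.sock socket. As the tcp.c:1574 bursts show, removing a listener immediately tears down the qpairs connected through it, which is what forces the host to fail over.

    #!/bin/bash
    RPC=./scripts/rpc.py
    # One TCP transport, then a RAM-backed namespace exported through cnode1.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Three listeners on the same address, different ports, to fail over between.
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # The drill: drop the active path, wait for failover, repeat.
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # the initiator attaches a third path on 4422 at this point
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
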
00:30:44.239 23:44:05 -- host/failover.sh@59 -- # wait 366840
00:30:49.540 0
00:30:49.540 23:44:10 -- host/failover.sh@61 -- # killprocess 366696
00:30:49.540 23:44:10 -- common/autotest_common.sh@926 -- # '[' -z 366696 ']'
00:30:49.540 23:44:10 -- common/autotest_common.sh@930 -- # kill -0 366696
00:30:49.540 23:44:10 -- common/autotest_common.sh@931 -- # uname
00:30:49.540 23:44:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:49.540 23:44:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 366696
00:30:49.803 23:44:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:30:49.803 23:44:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:30:49.803 23:44:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 366696'
killing process with pid 366696
00:30:49.803 23:44:10 -- common/autotest_common.sh@945 -- # kill 366696
00:30:49.803 23:44:10 -- common/autotest_common.sh@950 -- # wait 366696
00:30:49.803 23:44:10 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:49.803 [2024-07-11 23:43:53.635062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
[2024-07-11 23:43:53.635251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366696 ]
00:30:49.803 EAL: No free 2048 kB hugepages reported on node 1
00:30:49.803 [2024-07-11 23:43:53.707642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:49.803 [2024-07-11 23:43:53.793459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:49.803 Running I/O for 15 seconds...
00:30:49.803 [2024-07-11 23:43:56.546460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.803 [2024-07-11 23:43:56.546506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.803 [2024-07-11 23:43:56.546535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.803 [2024-07-11 23:43:56.546552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~90 further READ/WRITE command/completion pairs (lba 120736-121856, len:8 each) elided; every one completed as ABORTED - SQ DELETION (00/08) like the pairs above ...]
00:30:49.805 [2024-07-11 23:43:56.549317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:49.805 [2024-07-11 23:43:56.549331] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.805 [2024-07-11 23:43:56.549362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.805 [2024-07-11 23:43:56.549393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.805 [2024-07-11 23:43:56.549422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.805 [2024-07-11 23:43:56.549466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.805 [2024-07-11 23:43:56.549495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.805 [2024-07-11 23:43:56.549524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.805 [2024-07-11 23:43:56.549554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.805 [2024-07-11 23:43:56.549582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.805 [2024-07-11 23:43:56.549616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.805 [2024-07-11 23:43:56.549631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.805 [2024-07-11 23:43:56.549644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.549672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.549701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.549730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.549963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.549979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.549993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.550024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.550054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.550085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.550170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.550204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:49.806 [2024-07-11 23:43:56.550235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 
23:43:56.550281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.806 [2024-07-11 23:43:56.550444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2b70 is same with the state(5) to be set 00:30:49.806 [2024-07-11 23:43:56.550492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.806 [2024-07-11 23:43:56.550503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.806 [2024-07-11 23:43:56.550517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121600 len:8 PRP1 0x0 PRP2 0x0 00:30:49.806 [2024-07-11 23:43:56.550530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.806 [2024-07-11 23:43:56.550596] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1df2b70 was disconnected and freed. reset controller. 
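Each NOTICE pair above follows the same fixed layout: nvme_io_qpair_print_command prints the opcode, sqid, cid, nsid, lba and len of a queued command, and spdk_nvme_print_completion prints the status it was completed with. A minimal sketch for tallying such entries from a saved console log, assuming Python 3; the file name build.log is hypothetical, and entries the console wrapped across physical lines are not reassembled:

    #!/usr/bin/env python3
    # Sketch (not part of the test suite): count the READ/WRITE commands that
    # nvme_qpair.c printed while the submission queue was being deleted.
    # The pattern mirrors the log lines above; wrapped entries are skipped.
    import re
    from collections import Counter

    CMD = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: "
        r"(?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) "
        r"nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
    )

    def tally(path):
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                # A single console line may hold several entries.
                for m in CMD.finditer(line):
                    counts[m.group("op")] += 1
        return counts

    if __name__ == "__main__":
        counts = tally("build.log")  # hypothetical path to the captured output
        for op, n in sorted(counts.items()):
            print(f"{op}: {n} entries")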
00:30:49.806 [2024-07-11 23:43:56.550623] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:49.806 [2024-07-11 23:43:56.550663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.806 [2024-07-11 23:43:56.550681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.806 [2024-07-11 23:43:56.550697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.806 [2024-07-11 23:43:56.550719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.806 [2024-07-11 23:43:56.550732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.806 [2024-07-11 23:43:56.550744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.806 [2024-07-11 23:43:56.550757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.806 [2024-07-11 23:43:56.550770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.806 [2024-07-11 23:43:56.550783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.806 [2024-07-11 23:43:56.552894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.806 [2024-07-11 23:43:56.552934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd3fd0 (9): Bad file descriptor
00:30:49.806 [2024-07-11 23:43:56.579719] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
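Every completion in these bursts carries the same status, printed as "(00/08)": status code type 0 (generic command status) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, the expected outcome when the host tears down a queue during failover. A small decoder for that field, as a sketch; only the codes that appear in this log are mapped, with names matching SPDK's completion strings:

    # Sketch (our addition): decode the "(SCT/SC)" pair that
    # spdk_nvme_print_completion prints, e.g. "(00/08)" above.
    # Only generic (SCT 0) codes seen in this log are filled in.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(field):
        sct, sc = (int(part, 16) for part in field.strip("()").split("/"))
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

    assert decode_status("(00/08)") == "ABORTED - SQ DELETION"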
00:30:49.806 [2024-07-11 23:44:00.454767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.806 [2024-07-11 23:44:00.454813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of near-identical NOTICE pairs elided: queued READ/WRITE commands on qid:1, lba 19184-20376, are again printed and completed with ABORTED - SQ DELETION (00/08) during the next failover cycle ...]
00:30:49.811 [2024-07-11 23:44:00.458650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19736 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.811 [2024-07-11 23:44:00.458664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.458678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.811 [2024-07-11 23:44:00.458692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.458706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.811 [2024-07-11 23:44:00.458720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.458735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.811 [2024-07-11 23:44:00.458748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.458763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.811 [2024-07-11 23:44:00.458776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.458791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.811 [2024-07-11 23:44:00.458804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.458822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00740 is same with the state(5) to be set 00:30:49.811 [2024-07-11 23:44:00.458839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.811 [2024-07-11 23:44:00.458851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.811 [2024-07-11 23:44:00.458862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19824 len:8 PRP1 0x0 PRP2 0x0 00:30:49.811 [2024-07-11 23:44:00.458875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.458935] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e00740 was disconnected and freed. reset controller. 
00:30:49.811 [2024-07-11 23:44:00.458953] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:49.811 [2024-07-11 23:44:00.458987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.811 [2024-07-11 23:44:00.459005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.459019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.811 [2024-07-11 23:44:00.459032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.459045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.811 [2024-07-11 23:44:00.459058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.459071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.811 [2024-07-11 23:44:00.459083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.811 [2024-07-11 23:44:00.459102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.811 [2024-07-11 23:44:00.459176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd3fd0 (9): Bad file descriptor 00:30:49.811 [2024-07-11 23:44:00.461365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.811 [2024-07-11 23:44:00.532281] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
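The "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422" notice above is possible because the same controller name was attached against several target listeners, so bdev_nvme keeps the extra transport IDs as failover targets and moves to the next one when a qpair drops. A minimal sketch of that registration, mirroring the rpc.py calls traced further down in this log (the relative script path and the bdevperf RPC socket here are assumptions):

  # First call creates NVMe0 on the primary path; repeating it with the same
  # name but another port registers each extra listener as a failover trid.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With paths registered this way, tearing down the active listener produces exactly the sequence logged above: queued I/O is aborted with SQ DELETION status, the dead qpair is freed, and the controller is reset against the next trid.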
00:30:49.811 [2024-07-11 23:44:05.091157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.811 [2024-07-11 23:44:05.091226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.811 [2024-07-11 23:44:05.091245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.811 [2024-07-11 23:44:05.091259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.811 [2024-07-11 23:44:05.091273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.811 [2024-07-11 23:44:05.091286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.811 [2024-07-11 23:44:05.091301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:49.811 [2024-07-11 23:44:05.091314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:49.811 [2024-07-11 23:44:05.091337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd3fd0 is same with the state(5) to be set
00:30:49.811 [2024-07-11 23:44:05.092352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.811 [2024-07-11 23:44:05.092376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of further queued READ/WRITE commands on sqid:1 (lba values between 15352 and 16656) are printed and completed the same way, ABORTED - SQ DELETION (00/08), while this qpair is torn down ...]
00:30:49.814 [2024-07-11 23:44:05.096387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de04f0 is same with the state(5) to be set
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de04f0 is same with the state(5) to be set 00:30:49.814 [2024-07-11 23:44:05.096414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:49.814 [2024-07-11 23:44:05.096426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:49.814 [2024-07-11 23:44:05.096445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16272 len:8 PRP1 0x0 PRP2 0x0 00:30:49.814 [2024-07-11 23:44:05.096458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.814 [2024-07-11 23:44:05.096525] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1de04f0 was disconnected and freed. reset controller. 00:30:49.814 [2024-07-11 23:44:05.096543] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:49.814 [2024-07-11 23:44:05.096560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.814 [2024-07-11 23:44:05.098869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.814 [2024-07-11 23:44:05.098908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd3fd0 (9): Bad file descriptor 00:30:49.814 [2024-07-11 23:44:05.208846] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:49.814 00:30:49.814 Latency(us) 00:30:49.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.814 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:49.814 Verification LBA range: start 0x0 length 0x4000 00:30:49.814 NVMe0n1 : 15.01 13109.50 51.21 817.56 0.00 9174.49 958.77 16117.00 00:30:49.814 =================================================================================================================== 00:30:49.814 Total : 13109.50 51.21 817.56 0.00 9174.49 958.77 16117.00 00:30:49.814 Received shutdown signal, test time was about 15.000000 seconds 00:30:49.814 00:30:49.814 Latency(us) 00:30:49.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.814 =================================================================================================================== 00:30:49.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:49.814 23:44:10 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:49.814 23:44:10 -- host/failover.sh@65 -- # count=3 00:30:49.814 23:44:10 -- host/failover.sh@67 -- # (( count != 3 )) 00:30:49.814 23:44:10 -- host/failover.sh@73 -- # bdevperf_pid=368709 00:30:49.814 23:44:10 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:49.814 23:44:10 -- host/failover.sh@75 -- # waitforlisten 368709 /var/tmp/bdevperf.sock 00:30:49.814 23:44:10 -- common/autotest_common.sh@819 -- # '[' -z 368709 ']' 00:30:49.814 23:44:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:49.814 23:44:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:49.814 23:44:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
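The pass/fail criterion for the run above is the count just computed: one full failover cycle over the three configured portals (4420/4421/4422) must log 'Resetting controller successful' exactly three times. The fresh bdevperf instance is then started idle with -z (wait for RPC) on its own socket so controllers can be attached over RPC before any I/O is issued. A minimal sketch of the same pattern, using only flags that appear in this run (that try.txt is the captured bdevperf log is inferred from the cat/rm steps further down):

  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$out")
  (( count == 3 )) || exit 1    # one successful reset per configured portal
  # start bdevperf idle (-z) and drive it over a private RPC socket (-r)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &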
00:30:49.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:49.814 23:44:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:49.814 23:44:10 -- common/autotest_common.sh@10 -- # set +x 00:30:50.379 23:44:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:50.379 23:44:11 -- common/autotest_common.sh@852 -- # return 0 00:30:50.379 23:44:11 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:50.945 [2024-07-11 23:44:11.793234] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:50.945 23:44:11 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:51.203 [2024-07-11 23:44:12.078015] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:51.203 23:44:12 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.769 NVMe0n1 00:30:51.769 23:44:12 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.334 00:30:52.334 23:44:13 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.900 00:30:52.900 23:44:13 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.900 23:44:13 -- host/failover.sh@82 -- # grep -q NVMe0 00:30:53.158 23:44:13 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.416 23:44:14 -- host/failover.sh@87 -- # sleep 3 00:30:56.698 23:44:17 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:56.698 23:44:17 -- host/failover.sh@88 -- # grep -q NVMe0 00:30:56.956 23:44:17 -- host/failover.sh@90 -- # run_test_pid=369561 00:30:56.956 23:44:17 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:56.956 23:44:17 -- host/failover.sh@92 -- # wait 369561 00:30:58.331 0 00:30:58.331 23:44:18 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:58.331 [2024-07-11 23:44:10.803169] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:58.331 [2024-07-11 23:44:10.803300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368709 ] 00:30:58.331 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.331 [2024-07-11 23:44:10.910825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.331 [2024-07-11 23:44:10.997245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.331 [2024-07-11 23:44:14.204082] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:58.331 [2024-07-11 23:44:14.204173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.331 [2024-07-11 23:44:14.204209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.331 [2024-07-11 23:44:14.204225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.331 [2024-07-11 23:44:14.204239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.331 [2024-07-11 23:44:14.204253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.331 [2024-07-11 23:44:14.204266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.331 [2024-07-11 23:44:14.204280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.331 [2024-07-11 23:44:14.204294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.331 [2024-07-11 23:44:14.204308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.331 [2024-07-11 23:44:14.204346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.331 [2024-07-11 23:44:14.204377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190bfd0 (9): Bad file descriptor 00:30:58.331 [2024-07-11 23:44:14.215503] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:58.331 Running I/O for 1 seconds... 
00:30:58.331 00:30:58.331 Latency(us) 00:30:58.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.331 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:58.331 Verification LBA range: start 0x0 length 0x4000 00:30:58.331 NVMe0n1 : 1.01 13272.28 51.84 0.00 0.00 9602.99 952.70 17476.27 00:30:58.331 =================================================================================================================== 00:30:58.331 Total : 13272.28 51.84 0.00 0.00 9602.99 952.70 17476.27 00:30:58.331 23:44:18 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.331 23:44:18 -- host/failover.sh@95 -- # grep -q NVMe0 00:30:58.331 23:44:19 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.591 23:44:19 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.591 23:44:19 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:59.159 23:44:19 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:59.418 23:44:20 -- host/failover.sh@101 -- # sleep 3 00:31:02.727 23:44:23 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.727 23:44:23 -- host/failover.sh@103 -- # grep -q NVMe0 00:31:02.727 23:44:23 -- host/failover.sh@108 -- # killprocess 368709 00:31:02.727 23:44:23 -- common/autotest_common.sh@926 -- # '[' -z 368709 ']' 00:31:02.727 23:44:23 -- common/autotest_common.sh@930 -- # kill -0 368709 00:31:02.727 23:44:23 -- common/autotest_common.sh@931 -- # uname 00:31:02.727 23:44:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:02.727 23:44:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 368709 00:31:02.727 23:44:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:02.727 23:44:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:02.727 23:44:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 368709' 00:31:02.727 killing process with pid 368709 00:31:02.727 23:44:23 -- common/autotest_common.sh@945 -- # kill 368709 00:31:02.727 23:44:23 -- common/autotest_common.sh@950 -- # wait 368709 00:31:02.984 23:44:23 -- host/failover.sh@110 -- # sync 00:31:02.984 23:44:23 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.551 23:44:24 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:03.551 23:44:24 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:03.551 23:44:24 -- host/failover.sh@116 -- # nvmftestfini 00:31:03.551 23:44:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:03.551 23:44:24 -- nvmf/common.sh@116 -- # sync 00:31:03.551 23:44:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:03.551 23:44:24 -- nvmf/common.sh@119 -- # set +e 00:31:03.551 23:44:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:03.551 23:44:24 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:31:03.551 rmmod nvme_tcp 00:31:03.551 rmmod nvme_fabrics 00:31:03.551 rmmod nvme_keyring 00:31:03.551 23:44:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:03.551 23:44:24 -- nvmf/common.sh@123 -- # set -e 00:31:03.551 23:44:24 -- nvmf/common.sh@124 -- # return 0 00:31:03.551 23:44:24 -- nvmf/common.sh@477 -- # '[' -n 366171 ']' 00:31:03.551 23:44:24 -- nvmf/common.sh@478 -- # killprocess 366171 00:31:03.551 23:44:24 -- common/autotest_common.sh@926 -- # '[' -z 366171 ']' 00:31:03.551 23:44:24 -- common/autotest_common.sh@930 -- # kill -0 366171 00:31:03.551 23:44:24 -- common/autotest_common.sh@931 -- # uname 00:31:03.551 23:44:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:03.551 23:44:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 366171 00:31:03.551 23:44:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:03.551 23:44:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:03.551 23:44:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 366171' 00:31:03.551 killing process with pid 366171 00:31:03.551 23:44:24 -- common/autotest_common.sh@945 -- # kill 366171 00:31:03.551 23:44:24 -- common/autotest_common.sh@950 -- # wait 366171 00:31:03.809 23:44:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:03.810 23:44:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:03.810 23:44:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:03.810 23:44:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.810 23:44:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:03.810 23:44:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.810 23:44:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.810 23:44:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.710 23:44:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:05.710 00:31:05.710 real 0m39.889s 00:31:05.710 user 2m20.959s 00:31:05.710 sys 0m7.631s 00:31:05.710 23:44:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.710 23:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:05.710 ************************************ 00:31:05.710 END TEST nvmf_failover 00:31:05.710 ************************************ 00:31:05.969 23:44:26 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:05.969 23:44:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:05.969 23:44:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:05.969 23:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:05.969 ************************************ 00:31:05.969 START TEST nvmf_discovery 00:31:05.969 ************************************ 00:31:05.969 23:44:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:05.969 * Looking for test storage... 
00:31:05.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.969 23:44:26 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.969 23:44:26 -- nvmf/common.sh@7 -- # uname -s 00:31:05.969 23:44:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.969 23:44:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.969 23:44:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.969 23:44:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.969 23:44:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.969 23:44:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.969 23:44:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.969 23:44:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.969 23:44:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.969 23:44:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.969 23:44:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:05.969 23:44:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:05.969 23:44:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.969 23:44:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.969 23:44:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.969 23:44:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.969 23:44:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.969 23:44:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.969 23:44:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.969 23:44:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.969 23:44:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.969 23:44:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.969 23:44:26 -- paths/export.sh@5 -- # export PATH 00:31:05.969 23:44:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.969 23:44:26 -- nvmf/common.sh@46 -- # : 0 00:31:05.969 23:44:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:05.969 23:44:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:05.969 23:44:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:05.969 23:44:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.969 23:44:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.969 23:44:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:05.969 23:44:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:05.969 23:44:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:05.969 23:44:26 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:05.969 23:44:26 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:05.969 23:44:26 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:05.969 23:44:26 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:05.969 23:44:26 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:05.969 23:44:26 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:05.969 23:44:26 -- host/discovery.sh@25 -- # nvmftestinit 00:31:05.969 23:44:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:05.969 23:44:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.969 23:44:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:05.969 23:44:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:05.969 23:44:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:05.969 23:44:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.969 23:44:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.969 23:44:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.969 23:44:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:05.969 23:44:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:05.969 23:44:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:05.969 23:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:09.259 23:44:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:09.259 23:44:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:09.259 23:44:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:09.259 23:44:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:09.259 23:44:29 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:09.259 23:44:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:09.259 23:44:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:09.259 23:44:29 -- nvmf/common.sh@294 -- # net_devs=() 00:31:09.259 23:44:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:09.259 23:44:29 -- nvmf/common.sh@295 -- # e810=() 00:31:09.259 23:44:29 -- nvmf/common.sh@295 -- # local -ga e810 00:31:09.259 23:44:29 -- nvmf/common.sh@296 -- # x722=() 00:31:09.259 23:44:29 -- nvmf/common.sh@296 -- # local -ga x722 00:31:09.259 23:44:29 -- nvmf/common.sh@297 -- # mlx=() 00:31:09.259 23:44:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:09.259 23:44:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.259 23:44:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:09.259 23:44:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:09.259 23:44:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:09.259 23:44:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:09.259 23:44:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:09.259 23:44:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:09.259 23:44:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:09.259 23:44:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:09.259 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:09.259 23:44:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:09.259 23:44:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:09.259 23:44:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.259 23:44:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.259 23:44:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:09.260 23:44:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:09.260 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:09.260 23:44:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:09.260 23:44:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:09.260 
23:44:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.260 23:44:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:09.260 23:44:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.260 23:44:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:09.260 Found net devices under 0000:84:00.0: cvl_0_0 00:31:09.260 23:44:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.260 23:44:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:09.260 23:44:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.260 23:44:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:09.260 23:44:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.260 23:44:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:09.260 Found net devices under 0000:84:00.1: cvl_0_1 00:31:09.260 23:44:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.260 23:44:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:09.260 23:44:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:09.260 23:44:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:09.260 23:44:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.260 23:44:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.260 23:44:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.260 23:44:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:09.260 23:44:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.260 23:44:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.260 23:44:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:09.260 23:44:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.260 23:44:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.260 23:44:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:09.260 23:44:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:09.260 23:44:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.260 23:44:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.260 23:44:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.260 23:44:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.260 23:44:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:09.260 23:44:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.260 23:44:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.260 23:44:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.260 23:44:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:09.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:31:09.260 00:31:09.260 --- 10.0.0.2 ping statistics --- 00:31:09.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.260 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:31:09.260 23:44:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:09.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:31:09.260 00:31:09.260 --- 10.0.0.1 ping statistics --- 00:31:09.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.260 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:31:09.260 23:44:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.260 23:44:29 -- nvmf/common.sh@410 -- # return 0 00:31:09.260 23:44:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:09.260 23:44:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.260 23:44:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:09.260 23:44:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.260 23:44:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:09.260 23:44:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:09.260 23:44:29 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:09.260 23:44:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:09.260 23:44:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:09.260 23:44:29 -- common/autotest_common.sh@10 -- # set +x 00:31:09.260 23:44:29 -- nvmf/common.sh@469 -- # nvmfpid=372340 00:31:09.260 23:44:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:09.260 23:44:29 -- nvmf/common.sh@470 -- # waitforlisten 372340 00:31:09.260 23:44:29 -- common/autotest_common.sh@819 -- # '[' -z 372340 ']' 00:31:09.260 23:44:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.260 23:44:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:09.260 23:44:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.260 23:44:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:09.260 23:44:29 -- common/autotest_common.sh@10 -- # set +x 00:31:09.260 [2024-07-11 23:44:29.823862] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:09.260 [2024-07-11 23:44:29.824029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.260 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.260 [2024-07-11 23:44:29.960270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.260 [2024-07-11 23:44:30.071254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:09.260 [2024-07-11 23:44:30.071426] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.260 [2024-07-11 23:44:30.071447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.260 [2024-07-11 23:44:30.071463] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
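The plumbing above gives the target and the initiator separate network stacks on one machine: one port of the E810 pair (cvl_0_0, presumably cabled back-to-back with its sibling) is moved into the namespace cvl_0_0_ns_spdk as 10.0.0.2/24, its peer cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP/4420, and both directions are ping-verified. A condensed sketch of that topology, with the interface names and addresses taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched through 'ip netns exec cvl_0_0_ns_spdk', which is why its listeners on 10.0.0.2 are reached from the root namespace over real NIC queues rather than loopback.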
00:31:09.260 [2024-07-11 23:44:30.071493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.194 23:44:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:10.194 23:44:31 -- common/autotest_common.sh@852 -- # return 0 00:31:10.194 23:44:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:10.194 23:44:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:10.194 23:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:10.194 23:44:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.194 23:44:31 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.194 23:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.194 23:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:10.194 [2024-07-11 23:44:31.050067] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.194 23:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.194 23:44:31 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:10.194 23:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.194 23:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:10.194 [2024-07-11 23:44:31.058293] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:10.194 23:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.194 23:44:31 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:10.194 23:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.194 23:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:10.194 null0 00:31:10.194 23:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.194 23:44:31 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:10.194 23:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.194 23:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:10.194 null1 00:31:10.194 23:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.194 23:44:31 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:10.194 23:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.194 23:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:10.194 23:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.194 23:44:31 -- host/discovery.sh@45 -- # hostpid=372493 00:31:10.194 23:44:31 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:10.194 23:44:31 -- host/discovery.sh@46 -- # waitforlisten 372493 /tmp/host.sock 00:31:10.194 23:44:31 -- common/autotest_common.sh@819 -- # '[' -z 372493 ']' 00:31:10.194 23:44:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:10.194 23:44:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:10.194 23:44:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:10.194 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:10.194 23:44:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:10.194 23:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:10.194 [2024-07-11 23:44:31.130975] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:10.194 [2024-07-11 23:44:31.131052] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372493 ] 00:31:10.452 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.452 [2024-07-11 23:44:31.199227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.452 [2024-07-11 23:44:31.290044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:10.452 [2024-07-11 23:44:31.290235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.465 23:44:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:11.465 23:44:32 -- common/autotest_common.sh@852 -- # return 0 00:31:11.465 23:44:32 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:11.465 23:44:32 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:11.465 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.465 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.465 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.465 23:44:32 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:11.465 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.465 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.465 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.465 23:44:32 -- host/discovery.sh@72 -- # notify_id=0 00:31:11.465 23:44:32 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:11.465 23:44:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.465 23:44:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.465 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.465 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.465 23:44:32 -- host/discovery.sh@59 -- # sort 00:31:11.465 23:44:32 -- host/discovery.sh@59 -- # xargs 00:31:11.465 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.465 23:44:32 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:11.465 23:44:32 -- host/discovery.sh@79 -- # get_bdev_list 00:31:11.465 23:44:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.465 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.465 23:44:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.465 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.465 23:44:32 -- host/discovery.sh@55 -- # sort 00:31:11.465 23:44:32 -- host/discovery.sh@55 -- # xargs 00:31:11.465 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.465 23:44:32 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:11.465 23:44:32 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:11.465 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.465 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.465 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.465 23:44:32 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:31:11.466 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.466 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # xargs 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # sort 00:31:11.466 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.466 23:44:32 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:11.466 23:44:32 -- host/discovery.sh@83 -- # get_bdev_list 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.466 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.466 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # sort 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # xargs 00:31:11.466 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.466 23:44:32 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:11.466 23:44:32 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:11.466 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.466 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.466 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.466 23:44:32 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.466 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.466 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # sort 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # xargs 00:31:11.466 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.466 23:44:32 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:11.466 23:44:32 -- host/discovery.sh@87 -- # get_bdev_list 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.466 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.466 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # sort 00:31:11.466 23:44:32 -- host/discovery.sh@55 -- # xargs 00:31:11.466 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.466 23:44:32 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:11.466 23:44:32 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.466 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.466 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.466 [2024-07-11 23:44:32.397896] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.466 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.466 23:44:32 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.466 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.466 23:44:32 -- host/discovery.sh@59 -- # sort 00:31:11.466 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.466 23:44:32 
-- host/discovery.sh@59 -- # xargs 00:31:11.466 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.724 23:44:32 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:11.724 23:44:32 -- host/discovery.sh@93 -- # get_bdev_list 00:31:11.724 23:44:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.724 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.724 23:44:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.724 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.724 23:44:32 -- host/discovery.sh@55 -- # sort 00:31:11.724 23:44:32 -- host/discovery.sh@55 -- # xargs 00:31:11.724 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.724 23:44:32 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:11.724 23:44:32 -- host/discovery.sh@94 -- # get_notification_count 00:31:11.724 23:44:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:11.724 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.724 23:44:32 -- host/discovery.sh@74 -- # jq '. | length' 00:31:11.724 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.724 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.724 23:44:32 -- host/discovery.sh@74 -- # notification_count=0 00:31:11.724 23:44:32 -- host/discovery.sh@75 -- # notify_id=0 00:31:11.724 23:44:32 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:11.724 23:44:32 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:11.724 23:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.724 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:31:11.724 23:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.724 23:44:32 -- host/discovery.sh@100 -- # sleep 1 00:31:12.291 [2024-07-11 23:44:33.171349] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:12.291 [2024-07-11 23:44:33.171387] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:12.291 [2024-07-11 23:44:33.171415] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:12.549 [2024-07-11 23:44:33.257680] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:12.549 [2024-07-11 23:44:33.360659] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:12.549 [2024-07-11 23:44:33.360685] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:12.806 23:44:33 -- host/discovery.sh@101 -- # get_subsystem_names 00:31:12.806 23:44:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.806 23:44:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.806 23:44:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.806 23:44:33 -- common/autotest_common.sh@10 -- # set +x 00:31:12.806 23:44:33 -- host/discovery.sh@59 -- # sort 00:31:12.806 23:44:33 -- host/discovery.sh@59 -- # xargs 00:31:12.806 23:44:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.806 23:44:33 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.806 23:44:33 -- host/discovery.sh@102 -- # get_bdev_list 00:31:12.806 23:44:33 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.806 23:44:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.806 23:44:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.806 23:44:33 -- common/autotest_common.sh@10 -- # set +x 00:31:12.806 23:44:33 -- host/discovery.sh@55 -- # sort 00:31:12.806 23:44:33 -- host/discovery.sh@55 -- # xargs 00:31:12.806 23:44:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.806 23:44:33 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:12.806 23:44:33 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:12.806 23:44:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:12.806 23:44:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.806 23:44:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:12.806 23:44:33 -- common/autotest_common.sh@10 -- # set +x 00:31:12.806 23:44:33 -- host/discovery.sh@63 -- # sort -n 00:31:12.806 23:44:33 -- host/discovery.sh@63 -- # xargs 00:31:12.806 23:44:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.806 23:44:33 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:31:12.806 23:44:33 -- host/discovery.sh@104 -- # get_notification_count 00:31:12.806 23:44:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:12.806 23:44:33 -- host/discovery.sh@74 -- # jq '. | length' 00:31:12.806 23:44:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.806 23:44:33 -- common/autotest_common.sh@10 -- # set +x 00:31:12.806 23:44:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.063 23:44:33 -- host/discovery.sh@74 -- # notification_count=1 00:31:13.063 23:44:33 -- host/discovery.sh@75 -- # notify_id=1 00:31:13.063 23:44:33 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:13.063 23:44:33 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:13.063 23:44:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.063 23:44:33 -- common/autotest_common.sh@10 -- # set +x 00:31:13.063 23:44:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.063 23:44:33 -- host/discovery.sh@109 -- # sleep 1 00:31:13.997 23:44:34 -- host/discovery.sh@110 -- # get_bdev_list 00:31:13.997 23:44:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.997 23:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.997 23:44:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.997 23:44:34 -- common/autotest_common.sh@10 -- # set +x 00:31:13.997 23:44:34 -- host/discovery.sh@55 -- # sort 00:31:13.997 23:44:34 -- host/discovery.sh@55 -- # xargs 00:31:13.997 23:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.997 23:44:34 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.997 23:44:34 -- host/discovery.sh@111 -- # get_notification_count 00:31:13.997 23:44:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:13.997 23:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.997 23:44:34 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:13.997 23:44:34 -- common/autotest_common.sh@10 -- # set +x 00:31:13.997 23:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.255 23:44:34 -- host/discovery.sh@74 -- # notification_count=1 00:31:14.255 23:44:34 -- host/discovery.sh@75 -- # notify_id=2 00:31:14.255 23:44:34 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:14.255 23:44:34 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:14.255 23:44:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.255 23:44:34 -- common/autotest_common.sh@10 -- # set +x 00:31:14.255 [2024-07-11 23:44:34.961321] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:14.255 [2024-07-11 23:44:34.962218] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:14.255 [2024-07-11 23:44:34.962266] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:14.255 23:44:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.255 23:44:34 -- host/discovery.sh@117 -- # sleep 1 00:31:14.255 [2024-07-11 23:44:35.049492] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:14.512 [2024-07-11 23:44:35.310732] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:14.512 [2024-07-11 23:44:35.310760] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:14.512 [2024-07-11 23:44:35.310770] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:15.077 23:44:35 -- host/discovery.sh@118 -- # get_subsystem_names 00:31:15.077 23:44:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.077 23:44:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:15.077 23:44:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.077 23:44:35 -- common/autotest_common.sh@10 -- # set +x 00:31:15.077 23:44:35 -- host/discovery.sh@59 -- # sort 00:31:15.077 23:44:35 -- host/discovery.sh@59 -- # xargs 00:31:15.077 23:44:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.077 23:44:36 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.077 23:44:36 -- host/discovery.sh@119 -- # get_bdev_list 00:31:15.077 23:44:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.077 23:44:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:15.077 23:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.077 23:44:36 -- common/autotest_common.sh@10 -- # set +x 00:31:15.077 23:44:36 -- host/discovery.sh@55 -- # sort 00:31:15.077 23:44:36 -- host/discovery.sh@55 -- # xargs 00:31:15.336 23:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.336 23:44:36 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:15.336 23:44:36 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:15.336 23:44:36 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:15.336 23:44:36 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:15.336 23:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.336 23:44:36 -- 
common/autotest_common.sh@10 -- # set +x 00:31:15.336 23:44:36 -- host/discovery.sh@63 -- # sort -n 00:31:15.336 23:44:36 -- host/discovery.sh@63 -- # xargs 00:31:15.336 23:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.336 23:44:36 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:15.336 23:44:36 -- host/discovery.sh@121 -- # get_notification_count 00:31:15.336 23:44:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:15.336 23:44:36 -- host/discovery.sh@74 -- # jq '. | length' 00:31:15.336 23:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.336 23:44:36 -- common/autotest_common.sh@10 -- # set +x 00:31:15.336 23:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.336 23:44:36 -- host/discovery.sh@74 -- # notification_count=0 00:31:15.336 23:44:36 -- host/discovery.sh@75 -- # notify_id=2 00:31:15.336 23:44:36 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:31:15.336 23:44:36 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.336 23:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.336 23:44:36 -- common/autotest_common.sh@10 -- # set +x 00:31:15.336 [2024-07-11 23:44:36.145265] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:15.336 [2024-07-11 23:44:36.145298] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:15.336 23:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.336 23:44:36 -- host/discovery.sh@127 -- # sleep 1 00:31:15.336 [2024-07-11 23:44:36.151254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.336 [2024-07-11 23:44:36.151288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.336 [2024-07-11 23:44:36.151307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.336 [2024-07-11 23:44:36.151323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.336 [2024-07-11 23:44:36.151337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.336 [2024-07-11 23:44:36.151353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.336 [2024-07-11 23:44:36.151367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.336 [2024-07-11 23:44:36.151382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.336 [2024-07-11 23:44:36.151397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.161258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.336 [2024-07-11 23:44:36.171303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.336 [2024-07-11 
23:44:36.171587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.171834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.171886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c23a0 with addr=10.0.0.2, port=4420 00:31:15.336 [2024-07-11 23:44:36.171906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.171932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.336 [2024-07-11 23:44:36.171971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.336 [2024-07-11 23:44:36.171992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.336 [2024-07-11 23:44:36.172009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.336 [2024-07-11 23:44:36.172031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.336 [2024-07-11 23:44:36.181384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.336 [2024-07-11 23:44:36.181645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.181897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.181942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c23a0 with addr=10.0.0.2, port=4420 00:31:15.336 [2024-07-11 23:44:36.181960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.181984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.336 [2024-07-11 23:44:36.182022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.336 [2024-07-11 23:44:36.182043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.336 [2024-07-11 23:44:36.182058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.336 [2024-07-11 23:44:36.182079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.336 [2024-07-11 23:44:36.191461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.336 [2024-07-11 23:44:36.191723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.191999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.192045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c23a0 with addr=10.0.0.2, port=4420 00:31:15.336 [2024-07-11 23:44:36.192063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.192088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.336 [2024-07-11 23:44:36.192149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.336 [2024-07-11 23:44:36.192184] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.336 [2024-07-11 23:44:36.192199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.336 [2024-07-11 23:44:36.192221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.336 [2024-07-11 23:44:36.201543] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.336 [2024-07-11 23:44:36.201793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.202049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.202094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c23a0 with addr=10.0.0.2, port=4420 00:31:15.336 [2024-07-11 23:44:36.202118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.202153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.336 [2024-07-11 23:44:36.202194] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.336 [2024-07-11 23:44:36.202215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.336 [2024-07-11 23:44:36.202231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.336 [2024-07-11 23:44:36.202252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.336 [2024-07-11 23:44:36.211618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.336 [2024-07-11 23:44:36.211891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.212193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.212223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c23a0 with addr=10.0.0.2, port=4420 00:31:15.336 [2024-07-11 23:44:36.212240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.212265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.336 [2024-07-11 23:44:36.212302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.336 [2024-07-11 23:44:36.212323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.336 [2024-07-11 23:44:36.212339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.336 [2024-07-11 23:44:36.212361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.336 [2024-07-11 23:44:36.221693] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.336 [2024-07-11 23:44:36.221964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.222215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.222245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c23a0 with addr=10.0.0.2, port=4420 00:31:15.336 [2024-07-11 23:44:36.222263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.222287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.336 [2024-07-11 23:44:36.222310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.336 [2024-07-11 23:44:36.222324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.336 [2024-07-11 23:44:36.222340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.336 [2024-07-11 23:44:36.222377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.336 [2024-07-11 23:44:36.231769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.336 [2024-07-11 23:44:36.232727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.232994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.336 [2024-07-11 23:44:36.233042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c23a0 with addr=10.0.0.2, port=4420 00:31:15.336 [2024-07-11 23:44:36.233060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c23a0 is same with the state(5) to be set 00:31:15.336 [2024-07-11 23:44:36.233091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c23a0 (9): Bad file descriptor 00:31:15.337 [2024-07-11 23:44:36.233160] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:15.337 [2024-07-11 23:44:36.233189] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:15.337 [2024-07-11 23:44:36.233249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.337 [2024-07-11 23:44:36.233274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.337 [2024-07-11 23:44:36.233290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.337 [2024-07-11 23:44:36.233315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
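The xtrace lines above and below keep calling three small helpers from test/nvmf/host/discovery.sh (the @55, @59 and @74 markers in the trace). A minimal reconstruction of those pipelines, assembled from the traced commands rather than from the script itself, assuming the usual rpc_cmd wrapper around scripts/rpc.py is in scope:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_notification_count() {
        # '-i 2' in the trace skips the two notifications already consumed;
        # carrying notify_id in a global is an assumption about the real helper.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

xargs with no arguments simply joins the sorted names onto one line, which is what comparisons like [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] above match against.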
00:31:16.270 23:44:37 -- host/discovery.sh@128 -- # get_subsystem_names 00:31:16.270 23:44:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:16.270 23:44:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:16.270 23:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.270 23:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:16.270 23:44:37 -- host/discovery.sh@59 -- # sort 00:31:16.270 23:44:37 -- host/discovery.sh@59 -- # xargs 00:31:16.270 23:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.270 23:44:37 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.270 23:44:37 -- host/discovery.sh@129 -- # get_bdev_list 00:31:16.270 23:44:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.270 23:44:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.270 23:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.270 23:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:16.270 23:44:37 -- host/discovery.sh@55 -- # sort 00:31:16.270 23:44:37 -- host/discovery.sh@55 -- # xargs 00:31:16.528 23:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.528 23:44:37 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:16.528 23:44:37 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:31:16.528 23:44:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:16.528 23:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.528 23:44:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:16.528 23:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:16.528 23:44:37 -- host/discovery.sh@63 -- # sort -n 00:31:16.528 23:44:37 -- host/discovery.sh@63 -- # xargs 00:31:16.528 23:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.528 23:44:37 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:31:16.528 23:44:37 -- host/discovery.sh@131 -- # get_notification_count 00:31:16.528 23:44:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:16.528 23:44:37 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:16.528 23:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.528 23:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:16.528 23:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.528 23:44:37 -- host/discovery.sh@74 -- # notification_count=0 00:31:16.528 23:44:37 -- host/discovery.sh@75 -- # notify_id=2 00:31:16.528 23:44:37 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:31:16.528 23:44:37 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:16.528 23:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.528 23:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:16.528 23:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.528 23:44:37 -- host/discovery.sh@135 -- # sleep 1 00:31:17.462 23:44:38 -- host/discovery.sh@136 -- # get_subsystem_names 00:31:17.720 23:44:38 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:17.720 23:44:38 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:17.720 23:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.720 23:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.720 23:44:38 -- host/discovery.sh@59 -- # sort 00:31:17.720 23:44:38 -- host/discovery.sh@59 -- # xargs 00:31:17.721 23:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.721 23:44:38 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:31:17.721 23:44:38 -- host/discovery.sh@137 -- # get_bdev_list 00:31:17.721 23:44:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.721 23:44:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.721 23:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.721 23:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.721 23:44:38 -- host/discovery.sh@55 -- # sort 00:31:17.721 23:44:38 -- host/discovery.sh@55 -- # xargs 00:31:17.721 23:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.721 23:44:38 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:31:17.721 23:44:38 -- host/discovery.sh@138 -- # get_notification_count 00:31:17.721 23:44:38 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:17.721 23:44:38 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:17.721 23:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.721 23:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.721 23:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.721 23:44:38 -- host/discovery.sh@74 -- # notification_count=2 00:31:17.721 23:44:38 -- host/discovery.sh@75 -- # notify_id=4 00:31:17.721 23:44:38 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:31:17.721 23:44:38 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.721 23:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.721 23:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:18.655 [2024-07-11 23:44:39.586622] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:18.655 [2024-07-11 23:44:39.586651] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:18.655 [2024-07-11 23:44:39.586675] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:18.913 [2024-07-11 23:44:39.673948] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:19.172 [2024-07-11 23:44:39.945078] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:19.172 [2024-07-11 23:44:39.945118] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:19.172 23:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.173 23:44:39 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.173 23:44:39 -- common/autotest_common.sh@640 -- # local es=0 00:31:19.173 23:44:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.173 23:44:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:19.173 23:44:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:19.173 23:44:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:19.173 23:44:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:19.173 23:44:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.173 23:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.173 23:44:39 -- common/autotest_common.sh@10 -- # set +x 00:31:19.173 request: 00:31:19.173 { 00:31:19.173 "name": "nvme", 00:31:19.173 "trtype": "tcp", 00:31:19.173 "traddr": "10.0.0.2", 00:31:19.173 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:19.173 "adrfam": "ipv4", 00:31:19.173 "trsvcid": "8009", 00:31:19.173 "wait_for_attach": true, 00:31:19.173 "method": "bdev_nvme_start_discovery", 00:31:19.173 "req_id": 1 00:31:19.173 } 00:31:19.173 Got JSON-RPC error response 00:31:19.173 response: 00:31:19.173 { 00:31:19.173 "code": -17, 00:31:19.173 "message": "File exists" 00:31:19.173 } 00:31:19.173 23:44:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:19.173 23:44:39 -- common/autotest_common.sh@643 -- # es=1 00:31:19.173 23:44:39 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:19.173 23:44:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:19.173 23:44:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:19.173 23:44:39 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:19.173 23:44:39 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:19.173 23:44:39 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:19.173 23:44:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.173 23:44:39 -- common/autotest_common.sh@10 -- # set +x 00:31:19.173 23:44:39 -- host/discovery.sh@67 -- # sort 00:31:19.173 23:44:39 -- host/discovery.sh@67 -- # xargs 00:31:19.173 23:44:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.173 23:44:40 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:19.173 23:44:40 -- host/discovery.sh@147 -- # get_bdev_list 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.173 23:44:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:19.173 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # sort 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # xargs 00:31:19.173 23:44:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.173 23:44:40 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:19.173 23:44:40 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.173 23:44:40 -- common/autotest_common.sh@640 -- # local es=0 00:31:19.173 23:44:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.173 23:44:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:19.173 23:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:19.173 23:44:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:19.173 23:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:19.173 23:44:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.173 23:44:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.173 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:31:19.173 request: 00:31:19.173 { 00:31:19.173 "name": "nvme_second", 00:31:19.173 "trtype": "tcp", 00:31:19.173 "traddr": "10.0.0.2", 00:31:19.173 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:19.173 "adrfam": "ipv4", 00:31:19.173 "trsvcid": "8009", 00:31:19.173 "wait_for_attach": true, 00:31:19.173 "method": "bdev_nvme_start_discovery", 00:31:19.173 "req_id": 1 00:31:19.173 } 00:31:19.173 Got JSON-RPC error response 00:31:19.173 response: 00:31:19.173 { 00:31:19.173 "code": -17, 00:31:19.173 "message": "File exists" 00:31:19.173 } 00:31:19.173 23:44:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:19.173 23:44:40 -- common/autotest_common.sh@643 -- # es=1 00:31:19.173 23:44:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:19.173 23:44:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:19.173 23:44:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:19.173 
23:44:40 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:19.173 23:44:40 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:19.173 23:44:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.173 23:44:40 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:19.173 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:31:19.173 23:44:40 -- host/discovery.sh@67 -- # sort 00:31:19.173 23:44:40 -- host/discovery.sh@67 -- # xargs 00:31:19.173 23:44:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.173 23:44:40 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:19.173 23:44:40 -- host/discovery.sh@153 -- # get_bdev_list 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # sort 00:31:19.173 23:44:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.173 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:31:19.173 23:44:40 -- host/discovery.sh@55 -- # xargs 00:31:19.431 23:44:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.431 23:44:40 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:19.431 23:44:40 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:19.431 23:44:40 -- common/autotest_common.sh@640 -- # local es=0 00:31:19.431 23:44:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:19.431 23:44:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:19.431 23:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:19.431 23:44:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:19.431 23:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:19.431 23:44:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:19.431 23:44:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.431 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:31:20.365 [2024-07-11 23:44:41.164714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.365 [2024-07-11 23:44:41.165020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.365 [2024-07-11 23:44:41.165069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2438eb0 with addr=10.0.0.2, port=8010 00:31:20.365 [2024-07-11 23:44:41.165103] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:20.365 [2024-07-11 23:44:41.165120] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:20.365 [2024-07-11 23:44:41.165136] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:21.299 [2024-07-11 23:44:42.167109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-07-11 23:44:42.167480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.299 [2024-07-11 23:44:42.167542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x2438eb0 with addr=10.0.0.2, port=8010 00:31:21.299 [2024-07-11 23:44:42.167566] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:21.299 [2024-07-11 23:44:42.167580] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:21.299 [2024-07-11 23:44:42.167595] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:22.234 [2024-07-11 23:44:43.169187] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:22.234 request: 00:31:22.234 { 00:31:22.234 "name": "nvme_second", 00:31:22.234 "trtype": "tcp", 00:31:22.234 "traddr": "10.0.0.2", 00:31:22.234 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:22.234 "adrfam": "ipv4", 00:31:22.234 "trsvcid": "8010", 00:31:22.234 "attach_timeout_ms": 3000, 00:31:22.234 "method": "bdev_nvme_start_discovery", 00:31:22.234 "req_id": 1 00:31:22.234 } 00:31:22.234 Got JSON-RPC error response 00:31:22.234 response: 00:31:22.234 { 00:31:22.234 "code": -110, 00:31:22.234 "message": "Connection timed out" 00:31:22.234 } 00:31:22.234 23:44:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:22.234 23:44:43 -- common/autotest_common.sh@643 -- # es=1 00:31:22.234 23:44:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:22.234 23:44:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:22.234 23:44:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:22.234 23:44:43 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:22.234 23:44:43 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:22.234 23:44:43 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:22.234 23:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.234 23:44:43 -- common/autotest_common.sh@10 -- # set +x 00:31:22.234 23:44:43 -- host/discovery.sh@67 -- # sort 00:31:22.234 23:44:43 -- host/discovery.sh@67 -- # xargs 00:31:22.493 23:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.493 23:44:43 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:22.493 23:44:43 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:22.493 23:44:43 -- host/discovery.sh@162 -- # kill 372493 00:31:22.493 23:44:43 -- host/discovery.sh@163 -- # nvmftestfini 00:31:22.493 23:44:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:22.493 23:44:43 -- nvmf/common.sh@116 -- # sync 00:31:22.493 23:44:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:22.493 23:44:43 -- nvmf/common.sh@119 -- # set +e 00:31:22.493 23:44:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:22.493 23:44:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:22.493 rmmod nvme_tcp 00:31:22.493 rmmod nvme_fabrics 00:31:22.493 rmmod nvme_keyring 00:31:22.493 23:44:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:22.493 23:44:43 -- nvmf/common.sh@123 -- # set -e 00:31:22.493 23:44:43 -- nvmf/common.sh@124 -- # return 0 00:31:22.493 23:44:43 -- nvmf/common.sh@477 -- # '[' -n 372340 ']' 00:31:22.493 23:44:43 -- nvmf/common.sh@478 -- # killprocess 372340 00:31:22.493 23:44:43 -- common/autotest_common.sh@926 -- # '[' -z 372340 ']' 00:31:22.493 23:44:43 -- common/autotest_common.sh@930 -- # kill -0 372340 00:31:22.493 23:44:43 -- common/autotest_common.sh@931 -- # uname 00:31:22.493 23:44:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:22.493 23:44:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 372340 00:31:22.493 
23:44:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:22.493 23:44:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:22.493 23:44:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 372340' 00:31:22.493 killing process with pid 372340 00:31:22.493 23:44:43 -- common/autotest_common.sh@945 -- # kill 372340 00:31:22.493 23:44:43 -- common/autotest_common.sh@950 -- # wait 372340 00:31:22.751 23:44:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:22.751 23:44:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:22.751 23:44:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:22.751 23:44:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.751 23:44:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:22.751 23:44:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.751 23:44:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.751 23:44:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.289 23:44:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:25.289 00:31:25.289 real 0m18.986s 00:31:25.289 user 0m28.707s 00:31:25.289 sys 0m3.878s 00:31:25.289 23:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.289 23:44:45 -- common/autotest_common.sh@10 -- # set +x 00:31:25.289 ************************************ 00:31:25.289 END TEST nvmf_discovery 00:31:25.289 ************************************ 00:31:25.289 23:44:45 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:25.289 23:44:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:25.289 23:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:25.289 23:44:45 -- common/autotest_common.sh@10 -- # set +x 00:31:25.289 ************************************ 00:31:25.289 START TEST nvmf_discovery_remove_ifc 00:31:25.289 ************************************ 00:31:25.289 23:44:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:25.289 * Looking for test storage... 
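For reference, the two -17 "File exists" responses captured above are the expected result of calling bdev_nvme_start_discovery again while a discovery service is already attached; the flags below are copied verbatim from the traced rpc_cmd invocations (socket path, address and host NQN as in the log):

    # First call attaches the discovery ctrlr on 10.0.0.2:8009 and succeeds:
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Any repeat against the same target (-b nvme or -b nvme_second) returns
    # JSON-RPC -17 "File exists", while -b nvme_second against the unreachable
    # port 8010 with -T 3000 times out with JSON-RPC -110 "Connection timed out".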
00:31:25.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.289 23:44:45 -- nvmf/common.sh@7 -- # uname -s 00:31:25.289 23:44:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.289 23:44:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.289 23:44:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.289 23:44:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.289 23:44:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.289 23:44:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.289 23:44:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.289 23:44:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.289 23:44:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.289 23:44:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.289 23:44:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:25.289 23:44:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:25.289 23:44:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.289 23:44:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.289 23:44:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.289 23:44:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.289 23:44:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.289 23:44:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.289 23:44:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.289 23:44:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.289 23:44:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.289 23:44:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.289 23:44:45 -- paths/export.sh@5 -- # export PATH 00:31:25.289 23:44:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.289 23:44:45 -- nvmf/common.sh@46 -- # : 0 00:31:25.289 23:44:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:25.289 23:44:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:25.289 23:44:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:25.289 23:44:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.289 23:44:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.289 23:44:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:25.289 23:44:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:25.289 23:44:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:25.289 23:44:45 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:25.289 23:44:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:25.289 23:44:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.289 23:44:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:25.289 23:44:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:25.289 23:44:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:25.289 23:44:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.289 23:44:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.289 23:44:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.289 23:44:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:25.289 23:44:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:25.289 23:44:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:25.289 23:44:45 -- common/autotest_common.sh@10 -- # set +x 00:31:27.822 23:44:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:27.822 23:44:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:27.822 23:44:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:27.822 23:44:48 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:27.822 23:44:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:27.822 23:44:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:27.822 23:44:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:27.822 23:44:48 -- nvmf/common.sh@294 -- # net_devs=() 00:31:27.822 23:44:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:27.822 23:44:48 -- nvmf/common.sh@295 -- # e810=() 00:31:27.822 23:44:48 -- nvmf/common.sh@295 -- # local -ga e810 00:31:27.822 23:44:48 -- nvmf/common.sh@296 -- # x722=() 00:31:27.822 23:44:48 -- nvmf/common.sh@296 -- # local -ga x722 00:31:27.822 23:44:48 -- nvmf/common.sh@297 -- # mlx=() 00:31:27.822 23:44:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:27.822 23:44:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.822 23:44:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:27.822 23:44:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:27.822 23:44:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:27.822 23:44:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:27.822 23:44:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:27.822 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:27.822 23:44:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:27.822 23:44:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:27.822 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:27.822 23:44:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:27.822 23:44:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:27.822 23:44:48 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:27.822 23:44:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.822 23:44:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:27.822 23:44:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.822 23:44:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:27.822 Found net devices under 0000:84:00.0: cvl_0_0 00:31:27.822 23:44:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.822 23:44:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:27.822 23:44:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.822 23:44:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:27.822 23:44:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.822 23:44:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:27.822 Found net devices under 0000:84:00.1: cvl_0_1 00:31:27.822 23:44:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.822 23:44:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:27.822 23:44:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:27.822 23:44:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:27.822 23:44:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:27.822 23:44:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.822 23:44:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.822 23:44:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.822 23:44:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:27.823 23:44:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.823 23:44:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.823 23:44:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:27.823 23:44:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.823 23:44:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.823 23:44:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:27.823 23:44:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:27.823 23:44:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.823 23:44:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.823 23:44:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.823 23:44:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.823 23:44:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:27.823 23:44:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.823 23:44:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.823 23:44:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.823 23:44:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:27.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:27.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:31:27.823 00:31:27.823 --- 10.0.0.2 ping statistics --- 00:31:27.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.823 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:31:27.823 23:44:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:27.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:31:27.823 00:31:27.823 --- 10.0.0.1 ping statistics --- 00:31:27.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.823 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:31:27.823 23:44:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.823 23:44:48 -- nvmf/common.sh@410 -- # return 0 00:31:27.823 23:44:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:27.823 23:44:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.823 23:44:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:27.823 23:44:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:27.823 23:44:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.823 23:44:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:27.823 23:44:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:27.823 23:44:48 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:27.823 23:44:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:27.823 23:44:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:27.823 23:44:48 -- common/autotest_common.sh@10 -- # set +x 00:31:27.823 23:44:48 -- nvmf/common.sh@469 -- # nvmfpid=376108 00:31:27.823 23:44:48 -- nvmf/common.sh@470 -- # waitforlisten 376108 00:31:27.823 23:44:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:27.823 23:44:48 -- common/autotest_common.sh@819 -- # '[' -z 376108 ']' 00:31:27.823 23:44:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.823 23:44:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:27.823 23:44:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.823 23:44:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:27.823 23:44:48 -- common/autotest_common.sh@10 -- # set +x 00:31:27.823 [2024-07-11 23:44:48.489189] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:27.823 [2024-07-11 23:44:48.489280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.823 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.823 [2024-07-11 23:44:48.580083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.823 [2024-07-11 23:44:48.688222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:27.823 [2024-07-11 23:44:48.688378] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:27.823 [2024-07-11 23:44:48.688398] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.823 [2024-07-11 23:44:48.688414] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.823 [2024-07-11 23:44:48.688461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.760 23:44:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:28.760 23:44:49 -- common/autotest_common.sh@852 -- # return 0 00:31:28.760 23:44:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:28.760 23:44:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:28.760 23:44:49 -- common/autotest_common.sh@10 -- # set +x 00:31:28.760 23:44:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.760 23:44:49 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:28.760 23:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.760 23:44:49 -- common/autotest_common.sh@10 -- # set +x 00:31:28.760 [2024-07-11 23:44:49.686922] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.760 [2024-07-11 23:44:49.695171] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:28.760 null0 00:31:29.018 [2024-07-11 23:44:49.727056] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.018 23:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.018 23:44:49 -- host/discovery_remove_ifc.sh@59 -- # hostpid=376270 00:31:29.018 23:44:49 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:29.018 23:44:49 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 376270 /tmp/host.sock 00:31:29.018 23:44:49 -- common/autotest_common.sh@819 -- # '[' -z 376270 ']' 00:31:29.018 23:44:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:29.018 23:44:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:29.018 23:44:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:29.018 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:29.018 23:44:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:29.018 23:44:49 -- common/autotest_common.sh@10 -- # set +x 00:31:29.018 [2024-07-11 23:44:49.800605] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:29.018 [2024-07-11 23:44:49.800697] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376270 ] 00:31:29.018 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.018 [2024-07-11 23:44:49.876411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.018 [2024-07-11 23:44:49.967535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:29.018 [2024-07-11 23:44:49.967721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.276 23:44:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:29.276 23:44:50 -- common/autotest_common.sh@852 -- # return 0 00:31:29.276 23:44:50 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.276 23:44:50 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:29.276 23:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.276 23:44:50 -- common/autotest_common.sh@10 -- # set +x 00:31:29.276 23:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.276 23:44:50 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:29.276 23:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.276 23:44:50 -- common/autotest_common.sh@10 -- # set +x 00:31:29.276 23:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.276 23:44:50 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:29.276 23:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.276 23:44:50 -- common/autotest_common.sh@10 -- # set +x 00:31:30.647 [2024-07-11 23:44:51.206240] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:30.647 [2024-07-11 23:44:51.206274] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:30.647 [2024-07-11 23:44:51.206302] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.648 [2024-07-11 23:44:51.332698] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:30.648 [2024-07-11 23:44:51.435612] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:30.648 [2024-07-11 23:44:51.435672] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:30.648 [2024-07-11 23:44:51.435717] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:30.648 [2024-07-11 23:44:51.435742] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.648 [2024-07-11 23:44:51.435767] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:30.648 23:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.648 23:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.648 23:44:51 -- common/autotest_common.sh@10 -- # set +x 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.648 [2024-07-11 23:44:51.443693] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16fec50 was disconnected and freed. delete nvme_qpair. 00:31:30.648 23:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.648 23:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.648 23:44:51 -- common/autotest_common.sh@10 -- # set +x 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.648 23:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:30.648 23:44:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:32.054 23:44:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.054 23:44:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.054 23:44:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.054 23:44:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.054 23:44:52 -- common/autotest_common.sh@10 -- # set +x 00:31:32.054 23:44:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.054 23:44:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.054 23:44:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.054 23:44:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:32.054 23:44:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:32.986 23:44:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.986 23:44:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.986 23:44:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.986 23:44:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.986 23:44:53 -- common/autotest_common.sh@10 -- # set +x 00:31:32.986 23:44:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.986 23:44:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.986 23:44:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.986 23:44:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:32.986 23:44:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:33.919 23:44:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.919 23:44:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:31:33.919 23:44:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.919 23:44:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.919 23:44:54 -- common/autotest_common.sh@10 -- # set +x 00:31:33.919 23:44:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.919 23:44:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.919 23:44:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.919 23:44:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:33.919 23:44:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.850 23:44:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.850 23:44:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.850 23:44:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.850 23:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.850 23:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:34.850 23:44:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.850 23:44:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.850 23:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.850 23:44:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:34.850 23:44:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.223 23:44:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.223 23:44:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.223 23:44:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.223 23:44:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.223 23:44:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.223 23:44:56 -- common/autotest_common.sh@10 -- # set +x 00:31:36.223 23:44:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.223 23:44:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.223 23:44:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:36.223 23:44:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.223 [2024-07-11 23:44:56.876388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:36.223 [2024-07-11 23:44:56.876459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.223 [2024-07-11 23:44:56.876483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.223 [2024-07-11 23:44:56.876502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.223 [2024-07-11 23:44:56.876518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.223 [2024-07-11 23:44:56.876534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.223 [2024-07-11 23:44:56.876549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.223 [2024-07-11 23:44:56.876565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:36.223 [2024-07-11 23:44:56.876580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.223 [2024-07-11 23:44:56.876596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.223 [2024-07-11 23:44:56.876621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.223 [2024-07-11 23:44:56.876637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5060 is same with the state(5) to be set 00:31:36.223 [2024-07-11 23:44:56.886408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c5060 (9): Bad file descriptor 00:31:36.223 [2024-07-11 23:44:56.896456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.161 23:44:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:37.161 23:44:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.161 23:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.161 23:44:57 -- common/autotest_common.sh@10 -- # set +x 00:31:37.161 23:44:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:37.161 23:44:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:37.161 23:44:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:37.161 [2024-07-11 23:44:57.910152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:38.093 [2024-07-11 23:44:58.932204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:38.093 [2024-07-11 23:44:58.932256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c5060 with addr=10.0.0.2, port=4420 00:31:38.093 [2024-07-11 23:44:58.932284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5060 is same with the state(5) to be set 00:31:38.093 [2024-07-11 23:44:58.932319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:38.093 [2024-07-11 23:44:58.932337] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:38.093 [2024-07-11 23:44:58.932352] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:38.093 [2024-07-11 23:44:58.932371] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:38.093 [2024-07-11 23:44:58.932818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c5060 (9): Bad file descriptor 00:31:38.093 [2024-07-11 23:44:58.932860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
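The flood of errno 110 messages and the "Resetting controller failed" error above is the intended outcome of the earlier interface removal (ip addr del 10.0.0.2/24 and ip link set cvl_0_0 down inside the target netns): the host keeps retrying per the attach options from the start of the test (--reconnect-delay-sec 1, --ctrlr-loss-timeout-sec 2, --fast-io-fail-timeout-sec 1) until the controller is given up. The get_bdev_list/wait_for_bdev pair the xtrace keeps replaying reduces to roughly this sketch (reconstructed from the trace, not the verbatim script):

    # polling helpers in host/discovery_remove_ifc.sh, as reconstructed from the xtrace
    get_bdev_list() {
        # bdev names via the host-side RPC socket, normalized to one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }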
00:31:38.093 [2024-07-11 23:44:58.932903] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:38.093 [2024-07-11 23:44:58.932940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.093 [2024-07-11 23:44:58.932963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.093 [2024-07-11 23:44:58.932983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.093 [2024-07-11 23:44:58.932998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.093 [2024-07-11 23:44:58.933012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.093 [2024-07-11 23:44:58.933026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.093 [2024-07-11 23:44:58.933042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.093 [2024-07-11 23:44:58.933056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.093 [2024-07-11 23:44:58.933072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.093 [2024-07-11 23:44:58.933087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:38.093 [2024-07-11 23:44:58.933110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
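Both the data controller and the discovery controller are now in failed state, which is the condition the test was driving at. The next step restores the interface and expects the still-running discovery service to re-attach the subsystem under a fresh controller name (nvme1n1). The restore commands appear verbatim in the trace below:

    # restore the target-side interface inside the test netns (verbatim from the trace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ...after which wait_for_bdev nvme1n1 polls until discovery re-attaches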
00:31:38.093 [2024-07-11 23:44:58.933321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c5470 (9): Bad file descriptor 00:31:38.093 [2024-07-11 23:44:58.934343] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:38.093 [2024-07-11 23:44:58.934370] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:38.093 23:44:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.093 23:44:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:38.093 23:44:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:39.024 23:44:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.024 23:44:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.024 23:44:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.024 23:44:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.024 23:44:59 -- common/autotest_common.sh@10 -- # set +x 00:31:39.024 23:44:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.024 23:44:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.024 23:44:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.283 23:45:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.283 23:45:00 -- common/autotest_common.sh@10 -- # set +x 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.283 23:45:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:39.283 23:45:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:40.216 [2024-07-11 23:45:00.992241] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:40.216 [2024-07-11 23:45:00.992276] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:40.216 [2024-07-11 23:45:00.992303] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:40.216 23:45:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:40.216 23:45:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.216 23:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.216 23:45:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:40.216 23:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.216 23:45:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:40.216 23:45:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:40.216 23:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.216 [2024-07-11 23:45:01.119714] 
bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:40.473 23:45:01 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:40.473 23:45:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:40.473 [2024-07-11 23:45:01.301213] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:40.473 [2024-07-11 23:45:01.301276] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:40.473 [2024-07-11 23:45:01.301315] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:40.473 [2024-07-11 23:45:01.301340] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:40.473 [2024-07-11 23:45:01.301355] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:40.474 [2024-07-11 23:45:01.309902] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16d2ca0 was disconnected and freed. delete nvme_qpair. 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:41.406 23:45:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.406 23:45:02 -- common/autotest_common.sh@10 -- # set +x 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:41.406 23:45:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:41.406 23:45:02 -- host/discovery_remove_ifc.sh@90 -- # killprocess 376270 00:31:41.406 23:45:02 -- common/autotest_common.sh@926 -- # '[' -z 376270 ']' 00:31:41.406 23:45:02 -- common/autotest_common.sh@930 -- # kill -0 376270 00:31:41.406 23:45:02 -- common/autotest_common.sh@931 -- # uname 00:31:41.406 23:45:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.406 23:45:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 376270 00:31:41.406 23:45:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:41.406 23:45:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:41.406 23:45:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 376270' 00:31:41.406 killing process with pid 376270 00:31:41.406 23:45:02 -- common/autotest_common.sh@945 -- # kill 376270 00:31:41.406 23:45:02 -- common/autotest_common.sh@950 -- # wait 376270 00:31:41.663 23:45:02 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:41.663 23:45:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:41.663 23:45:02 -- nvmf/common.sh@116 -- # sync 00:31:41.663 23:45:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:41.663 23:45:02 -- nvmf/common.sh@119 -- # set +e 00:31:41.663 23:45:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:41.663 23:45:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:41.663 rmmod nvme_tcp 00:31:41.663 rmmod nvme_fabrics 00:31:41.663 rmmod nvme_keyring 00:31:41.663 23:45:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:41.663 23:45:02 -- nvmf/common.sh@123 -- # set -e 00:31:41.663 23:45:02 -- 
nvmf/common.sh@124 -- # return 0 00:31:41.663 23:45:02 -- nvmf/common.sh@477 -- # '[' -n 376108 ']' 00:31:41.663 23:45:02 -- nvmf/common.sh@478 -- # killprocess 376108 00:31:41.663 23:45:02 -- common/autotest_common.sh@926 -- # '[' -z 376108 ']' 00:31:41.663 23:45:02 -- common/autotest_common.sh@930 -- # kill -0 376108 00:31:41.663 23:45:02 -- common/autotest_common.sh@931 -- # uname 00:31:41.663 23:45:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.663 23:45:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 376108 00:31:41.663 23:45:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:41.663 23:45:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:41.663 23:45:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 376108' 00:31:41.663 killing process with pid 376108 00:31:41.663 23:45:02 -- common/autotest_common.sh@945 -- # kill 376108 00:31:41.663 23:45:02 -- common/autotest_common.sh@950 -- # wait 376108 00:31:41.921 23:45:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:41.921 23:45:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:41.921 23:45:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:41.921 23:45:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:41.921 23:45:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:41.921 23:45:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.921 23:45:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:41.921 23:45:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.454 23:45:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:44.454 00:31:44.454 real 0m19.204s 00:31:44.454 user 0m26.166s 00:31:44.454 sys 0m3.623s 00:31:44.454 23:45:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:44.454 23:45:04 -- common/autotest_common.sh@10 -- # set +x 00:31:44.454 ************************************ 00:31:44.454 END TEST nvmf_discovery_remove_ifc 00:31:44.454 ************************************ 00:31:44.454 23:45:04 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:31:44.454 23:45:04 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:44.454 23:45:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:44.454 23:45:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:44.454 23:45:04 -- common/autotest_common.sh@10 -- # set +x 00:31:44.454 ************************************ 00:31:44.454 START TEST nvmf_digest 00:31:44.454 ************************************ 00:31:44.454 23:45:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:44.454 * Looking for test storage... 
00:31:44.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:44.454 23:45:04 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:44.454 23:45:04 -- nvmf/common.sh@7 -- # uname -s 00:31:44.454 23:45:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:44.454 23:45:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:44.454 23:45:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:44.454 23:45:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:44.454 23:45:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:44.454 23:45:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:44.454 23:45:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:44.454 23:45:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:44.454 23:45:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:44.454 23:45:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:44.454 23:45:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:44.454 23:45:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:44.454 23:45:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:44.454 23:45:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:44.454 23:45:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:44.454 23:45:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:44.455 23:45:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:44.455 23:45:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.455 23:45:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.455 23:45:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.455 23:45:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.455 23:45:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.455 23:45:05 -- paths/export.sh@5 -- # export PATH 00:31:44.455 23:45:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.455 23:45:05 -- nvmf/common.sh@46 -- # : 0 00:31:44.455 23:45:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:44.455 23:45:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:44.455 23:45:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:44.455 23:45:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:44.455 23:45:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:44.455 23:45:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:44.455 23:45:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:44.455 23:45:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:44.455 23:45:05 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:44.455 23:45:05 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:44.455 23:45:05 -- host/digest.sh@16 -- # runtime=2 00:31:44.455 23:45:05 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:44.455 23:45:05 -- host/digest.sh@132 -- # nvmftestinit 00:31:44.455 23:45:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:44.455 23:45:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:44.455 23:45:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:44.455 23:45:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:44.455 23:45:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:44.455 23:45:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.455 23:45:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:44.455 23:45:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.455 23:45:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:44.455 23:45:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:44.455 23:45:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:44.455 23:45:05 -- common/autotest_common.sh@10 -- # set +x 00:31:46.990 23:45:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:46.990 23:45:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:46.990 23:45:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:46.990 23:45:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:46.990 23:45:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:46.990 23:45:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:46.990 23:45:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:46.990 23:45:07 -- 
nvmf/common.sh@294 -- # net_devs=() 00:31:46.990 23:45:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:46.990 23:45:07 -- nvmf/common.sh@295 -- # e810=() 00:31:46.990 23:45:07 -- nvmf/common.sh@295 -- # local -ga e810 00:31:46.990 23:45:07 -- nvmf/common.sh@296 -- # x722=() 00:31:46.990 23:45:07 -- nvmf/common.sh@296 -- # local -ga x722 00:31:46.990 23:45:07 -- nvmf/common.sh@297 -- # mlx=() 00:31:46.990 23:45:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:46.990 23:45:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.990 23:45:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:46.990 23:45:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:46.990 23:45:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:46.990 23:45:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:46.990 23:45:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:46.990 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:46.990 23:45:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:46.990 23:45:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:46.990 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:46.990 23:45:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:46.990 23:45:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:46.990 23:45:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.990 23:45:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:46.990 23:45:07 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.990 23:45:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:46.990 Found net devices under 0000:84:00.0: cvl_0_0 00:31:46.990 23:45:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.990 23:45:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:46.990 23:45:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.990 23:45:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:46.990 23:45:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.990 23:45:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:46.990 Found net devices under 0000:84:00.1: cvl_0_1 00:31:46.990 23:45:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.990 23:45:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:46.990 23:45:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:46.990 23:45:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:46.990 23:45:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.990 23:45:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.990 23:45:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.990 23:45:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:46.990 23:45:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.990 23:45:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.990 23:45:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:46.990 23:45:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.990 23:45:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.990 23:45:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:46.990 23:45:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:46.990 23:45:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.990 23:45:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.990 23:45:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.990 23:45:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.990 23:45:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:46.990 23:45:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.990 23:45:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.990 23:45:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.990 23:45:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:46.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:31:46.990 00:31:46.990 --- 10.0.0.2 ping statistics --- 00:31:46.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.990 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:46.990 23:45:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:31:46.990 00:31:46.990 --- 10.0.0.1 ping statistics --- 00:31:46.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.990 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:31:46.990 23:45:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.990 23:45:07 -- nvmf/common.sh@410 -- # return 0 00:31:46.990 23:45:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:46.990 23:45:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.990 23:45:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:46.990 23:45:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.990 23:45:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:46.990 23:45:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:46.990 23:45:07 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:46.990 23:45:07 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:31:46.990 23:45:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:46.990 23:45:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.990 23:45:07 -- common/autotest_common.sh@10 -- # set +x 00:31:46.990 ************************************ 00:31:46.990 START TEST nvmf_digest_clean 00:31:46.990 ************************************ 00:31:46.990 23:45:07 -- common/autotest_common.sh@1104 -- # run_digest 00:31:46.990 23:45:07 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:31:46.990 23:45:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:46.990 23:45:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:46.990 23:45:07 -- common/autotest_common.sh@10 -- # set +x 00:31:46.990 23:45:07 -- nvmf/common.sh@469 -- # nvmfpid=379943 00:31:46.990 23:45:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:46.990 23:45:07 -- nvmf/common.sh@470 -- # waitforlisten 379943 00:31:46.990 23:45:07 -- common/autotest_common.sh@819 -- # '[' -z 379943 ']' 00:31:46.990 23:45:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.990 23:45:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:46.990 23:45:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.990 23:45:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:46.990 23:45:07 -- common/autotest_common.sh@10 -- # set +x 00:31:46.990 [2024-07-11 23:45:07.814275] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:46.990 [2024-07-11 23:45:07.814362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.990 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.990 [2024-07-11 23:45:07.887256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.248 [2024-07-11 23:45:07.972851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:47.248 [2024-07-11 23:45:07.973007] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.248 [2024-07-11 23:45:07.973024] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.248 [2024-07-11 23:45:07.973036] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.248 [2024-07-11 23:45:07.973063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.248 23:45:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.248 23:45:08 -- common/autotest_common.sh@852 -- # return 0 00:31:47.248 23:45:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:47.248 23:45:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:47.248 23:45:08 -- common/autotest_common.sh@10 -- # set +x 00:31:47.248 23:45:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.248 23:45:08 -- host/digest.sh@120 -- # common_target_config 00:31:47.248 23:45:08 -- host/digest.sh@43 -- # rpc_cmd 00:31:47.248 23:45:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.248 23:45:08 -- common/autotest_common.sh@10 -- # set +x 00:31:47.248 null0 00:31:47.248 [2024-07-11 23:45:08.165035] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.248 [2024-07-11 23:45:08.189282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.248 23:45:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.248 23:45:08 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:31:47.248 23:45:08 -- host/digest.sh@77 -- # local rw bs qd 00:31:47.248 23:45:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:47.248 23:45:08 -- host/digest.sh@80 -- # rw=randread 00:31:47.248 23:45:08 -- host/digest.sh@80 -- # bs=4096 00:31:47.248 23:45:08 -- host/digest.sh@80 -- # qd=128 00:31:47.248 23:45:08 -- host/digest.sh@82 -- # bperfpid=380249 00:31:47.248 23:45:08 -- host/digest.sh@83 -- # waitforlisten 380249 /var/tmp/bperf.sock 00:31:47.248 23:45:08 -- common/autotest_common.sh@819 -- # '[' -z 380249 ']' 00:31:47.248 23:45:08 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:47.248 23:45:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:47.248 23:45:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:47.248 23:45:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:47.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:47.248 23:45:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:47.248 23:45:08 -- common/autotest_common.sh@10 -- # set +x 00:31:47.506 [2024-07-11 23:45:08.252526] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:47.506 [2024-07-11 23:45:08.252621] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380249 ] 00:31:47.506 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.506 [2024-07-11 23:45:08.328997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.506 [2024-07-11 23:45:08.421700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.764 23:45:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.764 23:45:08 -- common/autotest_common.sh@852 -- # return 0 00:31:47.764 23:45:08 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:47.764 23:45:08 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:47.764 23:45:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:48.022 23:45:08 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:48.022 23:45:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:48.588 nvme0n1 00:31:48.589 23:45:09 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:48.589 23:45:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:48.846 Running I/O for 2 seconds... 
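Each run_bperf pass in this suite has the same shape: start bdevperf with --wait-for-rpc, finish framework init over /var/tmp/bperf.sock, attach a controller with the digest flag under test (--ddgst here), then drive I/O through bdevperf.py. Condensed from the commands in the trace (the waitforlisten step between launch and first RPC is omitted):

    # one run_bperf pass, condensed; paths as used throughout this workspace
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests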
00:31:50.746 00:31:50.746 Latency(us) 00:31:50.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.747 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:50.747 nvme0n1 : 2.00 17957.20 70.15 0.00 0.00 7119.52 2463.67 18058.81 00:31:50.747 =================================================================================================================== 00:31:50.747 Total : 17957.20 70.15 0.00 0.00 7119.52 2463.67 18058.81 00:31:50.747 0 00:31:50.747 23:45:11 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:50.747 23:45:11 -- host/digest.sh@92 -- # get_accel_stats 00:31:50.747 23:45:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:50.747 23:45:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:50.747 23:45:11 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:50.747 | select(.opcode=="crc32c") 00:31:50.747 | "\(.module_name) \(.executed)"' 00:31:51.004 23:45:11 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:51.004 23:45:11 -- host/digest.sh@93 -- # exp_module=software 00:31:51.004 23:45:11 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:51.004 23:45:11 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:51.004 23:45:11 -- host/digest.sh@97 -- # killprocess 380249 00:31:51.004 23:45:11 -- common/autotest_common.sh@926 -- # '[' -z 380249 ']' 00:31:51.004 23:45:11 -- common/autotest_common.sh@930 -- # kill -0 380249 00:31:51.004 23:45:11 -- common/autotest_common.sh@931 -- # uname 00:31:51.004 23:45:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:51.004 23:45:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 380249 00:31:51.004 23:45:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:51.004 23:45:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:51.004 23:45:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 380249' 00:31:51.004 killing process with pid 380249 00:31:51.004 23:45:11 -- common/autotest_common.sh@945 -- # kill 380249 00:31:51.004 Received shutdown signal, test time was about 2.000000 seconds 00:31:51.004 00:31:51.004 Latency(us) 00:31:51.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.004 =================================================================================================================== 00:31:51.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:51.004 23:45:11 -- common/autotest_common.sh@950 -- # wait 380249 00:31:51.261 23:45:12 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:51.261 23:45:12 -- host/digest.sh@77 -- # local rw bs qd 00:31:51.261 23:45:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:51.261 23:45:12 -- host/digest.sh@80 -- # rw=randread 00:31:51.261 23:45:12 -- host/digest.sh@80 -- # bs=131072 00:31:51.261 23:45:12 -- host/digest.sh@80 -- # qd=16 00:31:51.261 23:45:12 -- host/digest.sh@82 -- # bperfpid=380990 00:31:51.261 23:45:12 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:51.261 23:45:12 -- host/digest.sh@83 -- # waitforlisten 380990 /var/tmp/bperf.sock 00:31:51.261 23:45:12 -- common/autotest_common.sh@819 -- # '[' -z 380990 ']' 00:31:51.261 23:45:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:31:51.261 23:45:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:51.262 23:45:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:51.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:51.262 23:45:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:51.262 23:45:12 -- common/autotest_common.sh@10 -- # set +x 00:31:51.262 [2024-07-11 23:45:12.202389] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:51.262 [2024-07-11 23:45:12.202481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380990 ] 00:31:51.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:51.262 Zero copy mechanism will not be used. 00:31:51.540 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.540 [2024-07-11 23:45:12.291854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.541 [2024-07-11 23:45:12.384078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.805 23:45:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:51.805 23:45:12 -- common/autotest_common.sh@852 -- # return 0 00:31:51.805 23:45:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:51.805 23:45:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:51.805 23:45:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:52.370 23:45:13 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:52.370 23:45:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:53.301 nvme0n1 00:31:53.301 23:45:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:53.301 23:45:13 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:53.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:53.301 Zero copy mechanism will not be used. 00:31:53.301 Running I/O for 2 seconds... 
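The zero-copy notice in this pass is informational, not a failure: 131072-byte I/O exceeds the posix sock layer's 65536-byte zero-copy threshold, so buffers are copied on send and receive for these 128 KiB requests.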
00:31:55.828 00:31:55.828 Latency(us) 00:31:55.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.828 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:55.828 nvme0n1 : 2.00 2255.37 281.92 0.00 0.00 7090.20 5922.51 12233.39 00:31:55.828 =================================================================================================================== 00:31:55.828 Total : 2255.37 281.92 0.00 0.00 7090.20 5922.51 12233.39 00:31:55.828 0 00:31:55.828 23:45:16 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:55.828 23:45:16 -- host/digest.sh@92 -- # get_accel_stats 00:31:55.828 23:45:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:55.828 23:45:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:55.828 23:45:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:55.828 | select(.opcode=="crc32c") 00:31:55.828 | "\(.module_name) \(.executed)"' 00:31:55.828 23:45:16 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:55.828 23:45:16 -- host/digest.sh@93 -- # exp_module=software 00:31:55.828 23:45:16 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:55.828 23:45:16 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:55.828 23:45:16 -- host/digest.sh@97 -- # killprocess 380990 00:31:55.828 23:45:16 -- common/autotest_common.sh@926 -- # '[' -z 380990 ']' 00:31:55.828 23:45:16 -- common/autotest_common.sh@930 -- # kill -0 380990 00:31:55.828 23:45:16 -- common/autotest_common.sh@931 -- # uname 00:31:55.828 23:45:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:55.828 23:45:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 380990 00:31:55.828 23:45:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:55.828 23:45:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:55.828 23:45:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 380990' 00:31:55.828 killing process with pid 380990 00:31:55.828 23:45:16 -- common/autotest_common.sh@945 -- # kill 380990 00:31:55.828 Received shutdown signal, test time was about 2.000000 seconds 00:31:55.828 00:31:55.828 Latency(us) 00:31:55.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.828 =================================================================================================================== 00:31:55.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:55.828 23:45:16 -- common/autotest_common.sh@950 -- # wait 380990 00:31:55.828 23:45:16 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:55.828 23:45:16 -- host/digest.sh@77 -- # local rw bs qd 00:31:55.828 23:45:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:55.828 23:45:16 -- host/digest.sh@80 -- # rw=randwrite 00:31:55.828 23:45:16 -- host/digest.sh@80 -- # bs=4096 00:31:55.828 23:45:16 -- host/digest.sh@80 -- # qd=128 00:31:55.828 23:45:16 -- host/digest.sh@82 -- # bperfpid=381540 00:31:55.828 23:45:16 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:55.828 23:45:16 -- host/digest.sh@83 -- # waitforlisten 381540 /var/tmp/bperf.sock 00:31:55.828 23:45:16 -- common/autotest_common.sh@819 -- # '[' -z 381540 ']' 00:31:55.828 23:45:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:31:55.829 23:45:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:55.829 23:45:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:55.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:55.829 23:45:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:55.829 23:45:16 -- common/autotest_common.sh@10 -- # set +x 00:31:55.829 [2024-07-11 23:45:16.748026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:55.829 [2024-07-11 23:45:16.748115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381540 ] 00:31:56.086 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.086 [2024-07-11 23:45:16.817857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.086 [2024-07-11 23:45:16.906380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.086 23:45:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:56.086 23:45:16 -- common/autotest_common.sh@852 -- # return 0 00:31:56.086 23:45:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:56.086 23:45:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:56.086 23:45:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:56.652 23:45:17 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:56.652 23:45:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:57.583 nvme0n1 00:31:57.583 23:45:18 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:57.583 23:45:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:57.583 Running I/O for 2 seconds... 
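After each timed pass the harness checks that digests were actually computed, and by which accel module, by filtering the bperf accel statistics for the crc32c opcode; the expected module throughout this run is software. Roughly, around the jq filter visible in the trace:

    # crc32c verification step, as shown in the xtrace
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))            # some digests must have been computed
    [[ $acc_module == software ]]     # ...and by the expected module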
00:31:59.479 00:31:59.479 Latency(us) 00:31:59.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.479 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.479 nvme0n1 : 2.00 20445.38 79.86 0.00 0.00 6253.14 2985.53 18835.53 00:31:59.479 =================================================================================================================== 00:31:59.479 Total : 20445.38 79.86 0.00 0.00 6253.14 2985.53 18835.53 00:31:59.479 0 00:31:59.736 23:45:20 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:59.736 23:45:20 -- host/digest.sh@92 -- # get_accel_stats 00:31:59.736 23:45:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:59.736 23:45:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:59.736 | select(.opcode=="crc32c") 00:31:59.736 | "\(.module_name) \(.executed)"' 00:31:59.736 23:45:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:59.994 23:45:20 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:59.994 23:45:20 -- host/digest.sh@93 -- # exp_module=software 00:31:59.994 23:45:20 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:59.994 23:45:20 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:59.994 23:45:20 -- host/digest.sh@97 -- # killprocess 381540 00:31:59.994 23:45:20 -- common/autotest_common.sh@926 -- # '[' -z 381540 ']' 00:31:59.994 23:45:20 -- common/autotest_common.sh@930 -- # kill -0 381540 00:31:59.994 23:45:20 -- common/autotest_common.sh@931 -- # uname 00:31:59.994 23:45:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:59.994 23:45:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 381540 00:31:59.994 23:45:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:59.994 23:45:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:59.994 23:45:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 381540' 00:31:59.994 killing process with pid 381540 00:31:59.994 23:45:20 -- common/autotest_common.sh@945 -- # kill 381540 00:31:59.994 Received shutdown signal, test time was about 2.000000 seconds 00:31:59.994 00:31:59.994 Latency(us) 00:31:59.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.994 =================================================================================================================== 00:31:59.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.994 23:45:20 -- common/autotest_common.sh@950 -- # wait 381540 00:32:00.252 23:45:20 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:32:00.252 23:45:20 -- host/digest.sh@77 -- # local rw bs qd 00:32:00.252 23:45:20 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:00.252 23:45:20 -- host/digest.sh@80 -- # rw=randwrite 00:32:00.252 23:45:20 -- host/digest.sh@80 -- # bs=131072 00:32:00.252 23:45:20 -- host/digest.sh@80 -- # qd=16 00:32:00.252 23:45:20 -- host/digest.sh@82 -- # bperfpid=382083 00:32:00.252 23:45:20 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:00.252 23:45:20 -- host/digest.sh@83 -- # waitforlisten 382083 /var/tmp/bperf.sock 00:32:00.252 23:45:20 -- common/autotest_common.sh@819 -- # '[' -z 382083 ']' 00:32:00.252 23:45:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:32:00.252 23:45:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:00.252 23:45:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:00.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:00.252 23:45:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:00.252 23:45:20 -- common/autotest_common.sh@10 -- # set +x 00:32:00.252 [2024-07-11 23:45:21.075130] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:00.252 [2024-07-11 23:45:21.075311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382083 ] 00:32:00.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:00.252 Zero copy mechanism will not be used. 00:32:00.252 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.252 [2024-07-11 23:45:21.173093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.510 [2024-07-11 23:45:21.268659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.442 23:45:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:01.442 23:45:22 -- common/autotest_common.sh@852 -- # return 0 00:32:01.442 23:45:22 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:01.442 23:45:22 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:01.442 23:45:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:02.008 23:45:22 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:02.008 23:45:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:02.572 nvme0n1 00:32:02.572 23:45:23 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:02.572 23:45:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:02.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:02.829 Zero copy mechanism will not be used. 00:32:02.829 Running I/O for 2 seconds... 
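Teardown of each bperf instance (and, at the end, of the target itself) goes through the killprocess helper; per the xtrace it sanity-checks the pid and resolves the process name before killing, roughly:

    # killprocess flow, as reconstructed from the xtrace
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        # (the real helper special-cases name == sudo; skipped in this sketch)
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                # wait reaps it when it is our child
    }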
00:32:04.728 
00:32:04.728 Latency(us)
00:32:04.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:04.728 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:04.728 nvme0n1 : 2.01 2031.28 253.91 0.00 0.00 7856.06 4805.97 20097.71
00:32:04.728 ===================================================================================================================
00:32:04.728 Total : 2031.28 253.91 0.00 0.00 7856.06 4805.97 20097.71
00:32:04.728 0
00:32:04.728 23:45:25 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:32:04.728 23:45:25 -- host/digest.sh@92 -- # get_accel_stats
00:32:04.728 23:45:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:04.728 23:45:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:04.728 23:45:25 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:04.728 | select(.opcode=="crc32c")
00:32:04.728 | "\(.module_name) \(.executed)"'
00:32:05.293 23:45:26 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:32:05.294 23:45:26 -- host/digest.sh@93 -- # exp_module=software
00:32:05.294 23:45:26 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:32:05.294 23:45:26 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:05.294 23:45:26 -- host/digest.sh@97 -- # killprocess 382083
00:32:05.294 23:45:26 -- common/autotest_common.sh@926 -- # '[' -z 382083 ']'
00:32:05.294 23:45:26 -- common/autotest_common.sh@930 -- # kill -0 382083
00:32:05.294 23:45:26 -- common/autotest_common.sh@931 -- # uname
00:32:05.294 23:45:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:05.294 23:45:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 382083
00:32:05.294 23:45:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:05.294 23:45:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:05.294 23:45:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 382083'
00:32:05.294 killing process with pid 382083
00:32:05.294 23:45:26 -- common/autotest_common.sh@945 -- # kill 382083
00:32:05.294 Received shutdown signal, test time was about 2.000000 seconds
00:32:05.294 
00:32:05.294 Latency(us)
00:32:05.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:05.294 ===================================================================================================================
00:32:05.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:05.294 23:45:26 -- common/autotest_common.sh@950 -- # wait 382083
00:32:05.551 23:45:26 -- host/digest.sh@126 -- # killprocess 379943
00:32:05.551 23:45:26 -- common/autotest_common.sh@926 -- # '[' -z 379943 ']'
00:32:05.551 23:45:26 -- common/autotest_common.sh@930 -- # kill -0 379943
00:32:05.551 23:45:26 -- common/autotest_common.sh@931 -- # uname
00:32:05.551 23:45:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:05.551 23:45:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 379943
00:32:05.551 23:45:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:32:05.551 23:45:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:32:05.551 23:45:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 379943'
00:32:05.551 killing process with pid 379943
00:32:05.551 23:45:26 -- common/autotest_common.sh@945 -- # kill 379943
00:32:05.551 23:45:26 -- common/autotest_common.sh@950 -- # wait 379943
00:32:05.809 
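The pass/fail check that just ran is compact enough to restate on its own. This sketch mirrors the traced host/digest.sh@36-@95 steps; the rpc.py path and bperf socket are taken from this run, and only the final echo is illustrative:

    # Ask bdevperf which accel module executed crc32c and how many times;
    # the clean test expects the 'software' module to have done real work.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    read -r acc_module acc_executed < <("$rpc" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo 'crc32c digest check passed'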
00:32:05.809 real 0m18.902s
00:32:05.809 user 0m39.678s
00:32:05.809 sys 0m4.933s
00:32:05.809 23:45:26 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:05.809 23:45:26 -- common/autotest_common.sh@10 -- # set +x
00:32:05.809 ************************************
00:32:05.809 END TEST nvmf_digest_clean
00:32:05.809 ************************************
00:32:05.809 23:45:26 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error
00:32:05.809 23:45:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:32:05.809 23:45:26 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:05.809 23:45:26 -- common/autotest_common.sh@10 -- # set +x
00:32:05.809 ************************************
00:32:05.809 START TEST nvmf_digest_error
00:32:05.809 ************************************
00:32:05.809 23:45:26 -- common/autotest_common.sh@1104 -- # run_digest_error
00:32:05.809 23:45:26 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc
00:32:05.809 23:45:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:05.809 23:45:26 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:05.809 23:45:26 -- common/autotest_common.sh@10 -- # set +x
00:32:05.809 23:45:26 -- nvmf/common.sh@469 -- # nvmfpid=382794
00:32:05.809 23:45:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:32:05.809 23:45:26 -- nvmf/common.sh@470 -- # waitforlisten 382794
00:32:05.809 23:45:26 -- common/autotest_common.sh@819 -- # '[' -z 382794 ']'
00:32:05.809 23:45:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:05.809 23:45:26 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:05.809 23:45:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:05.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:05.809 23:45:26 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:05.809 23:45:26 -- common/autotest_common.sh@10 -- # set +x
00:32:06.067 [2024-07-11 23:45:26.760277] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:06.067 [2024-07-11 23:45:26.760373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:06.067 EAL: No free 2048 kB hugepages reported on node 1
00:32:06.067 [2024-07-11 23:45:26.846401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:06.067 [2024-07-11 23:45:26.940927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:06.067 [2024-07-11 23:45:26.941101] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:06.067 [2024-07-11 23:45:26.941121] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:06.067 [2024-07-11 23:45:26.941135] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
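Note that nvmf_tgt is started with --wait-for-rpc here: run_digest_error needs to reroute crc32c before target initialization completes, which is what the accel_assign_opc call traced just below does. A minimal sketch of that one target-side step, assuming the target RPC socket /var/tmp/spdk.sock named by the waitforlisten above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # While the target is paused at --wait-for-rpc, send all crc32c work to the
    # injectable 'error' accel module instead of the default software module.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error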
00:32:06.067 [2024-07-11 23:45:26.941178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:06.067 23:45:27 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:06.067 23:45:27 -- common/autotest_common.sh@852 -- # return 0
00:32:06.067 23:45:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:32:06.067 23:45:27 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:06.067 23:45:27 -- common/autotest_common.sh@10 -- # set +x
00:32:06.325 23:45:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:06.325 23:45:27 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:32:06.325 23:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:06.325 23:45:27 -- common/autotest_common.sh@10 -- # set +x
00:32:06.325 [2024-07-11 23:45:27.033840] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:32:06.325 23:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:06.325 23:45:27 -- host/digest.sh@104 -- # common_target_config
00:32:06.325 23:45:27 -- host/digest.sh@43 -- # rpc_cmd
00:32:06.325 23:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:06.325 23:45:27 -- common/autotest_common.sh@10 -- # set +x
00:32:06.325 null0
00:32:06.325 [2024-07-11 23:45:27.156775] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:06.325 [2024-07-11 23:45:27.181058] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:06.325 23:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:06.325 23:45:27 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128
00:32:06.325 23:45:27 -- host/digest.sh@54 -- # local rw bs qd
00:32:06.325 23:45:27 -- host/digest.sh@56 -- # rw=randread
00:32:06.325 23:45:27 -- host/digest.sh@56 -- # bs=4096
00:32:06.325 23:45:27 -- host/digest.sh@56 -- # qd=128
00:32:06.325 23:45:27 -- host/digest.sh@58 -- # bperfpid=382818
00:32:06.325 23:45:27 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:32:06.325 23:45:27 -- host/digest.sh@60 -- # waitforlisten 382818 /var/tmp/bperf.sock
00:32:06.325 23:45:27 -- common/autotest_common.sh@819 -- # '[' -z 382818 ']'
00:32:06.325 23:45:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:06.325 23:45:27 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:06.325 23:45:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:06.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:06.325 23:45:27 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:06.325 23:45:27 -- common/autotest_common.sh@10 -- # set +x
00:32:06.325 [2024-07-11 23:45:27.227864] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:06.325 [2024-07-11 23:45:27.227942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382818 ]
00:32:06.325 EAL: No free 2048 kB hugepages reported on node 1
00:32:06.583 [2024-07-11 23:45:27.295616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:06.583 [2024-07-11 23:45:27.387653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:06.583 23:45:27 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:06.583 23:45:27 -- common/autotest_common.sh@852 -- # return 0
00:32:06.583 23:45:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:06.583 23:45:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:07.148 23:45:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:07.148 23:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:07.148 23:45:27 -- common/autotest_common.sh@10 -- # set +x
00:32:07.148 23:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:07.148 23:45:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:07.148 23:45:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:07.405 nvme0n1
00:32:07.405 23:45:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:07.405 23:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:07.405 23:45:28 -- common/autotest_common.sh@10 -- # set +x
00:32:07.405 23:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:07.405 23:45:28 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:07.405 23:45:28 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:07.663 Running I/O for 2 seconds...
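The wall of digest errors that follows is the test working as intended. Just before perform_tests, the traced accel_error_inject_error call arms the error module on the target side; each corrupted crc32c then surfaces on the initiator as a 'data digest error on tqpair' notice plus a COMMAND TRANSIENT TRANSPORT ERROR completion, and with --bdev-retry-count -1 set above the I/O is simply retried. A sketch of the arming step, with flags copied verbatim from the trace (consult 'rpc.py accel_error_inject_error -h' for the exact -t/-i semantics):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Arm the target's error accel module so computed crc32c values are
    # corrupted, making NVMe/TCP data digests stop matching on the wire.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256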
00:32:07.663 [2024-07-11 23:45:28.460796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.663 [2024-07-11 23:45:28.460849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.663 [2024-07-11 23:45:28.460873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.663 [2024-07-11 23:45:28.480024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.663 [2024-07-11 23:45:28.480060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.663 [2024-07-11 23:45:28.480080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.663 [2024-07-11 23:45:28.497538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.663 [2024-07-11 23:45:28.497574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.663 [2024-07-11 23:45:28.497593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.663 [2024-07-11 23:45:28.510292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.663 [2024-07-11 23:45:28.510326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.663 [2024-07-11 23:45:28.510347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.663 [2024-07-11 23:45:28.522951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.663 [2024-07-11 23:45:28.523002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.663 [2024-07-11 23:45:28.523022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.663 [2024-07-11 23:45:28.535477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.663 [2024-07-11 23:45:28.535512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.664 [2024-07-11 23:45:28.535532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.664 [2024-07-11 23:45:28.547520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.664 [2024-07-11 23:45:28.547555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.664 [2024-07-11 23:45:28.547574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.664 [2024-07-11 23:45:28.560701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.664 [2024-07-11 23:45:28.560736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.664 [2024-07-11 23:45:28.560755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.664 [2024-07-11 23:45:28.573074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.664 [2024-07-11 23:45:28.573109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.664 [2024-07-11 23:45:28.573129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.664 [2024-07-11 23:45:28.585386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.664 [2024-07-11 23:45:28.585421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.664 [2024-07-11 23:45:28.585440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.664 [2024-07-11 23:45:28.598477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.664 [2024-07-11 23:45:28.598511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.664 [2024-07-11 23:45:28.598530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.664 [2024-07-11 23:45:28.610861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.664 [2024-07-11 23:45:28.610895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.664 [2024-07-11 23:45:28.610915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.623046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.623082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.623101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.635273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.635307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.635325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.648290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.648325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.648344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.660680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.660715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.660734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.672877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.672912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.685923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.685958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.685977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.697896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.697930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.697959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.710488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.710522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.710541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.722957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.722992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:07.922 [2024-07-11 23:45:28.723011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.736039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.922 [2024-07-11 23:45:28.736073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.922 [2024-07-11 23:45:28.736105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.922 [2024-07-11 23:45:28.748382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.748422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.748441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.760601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.760637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.760657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.773877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.773912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.773932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.785790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.785824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.785843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.798198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.798232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.798252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.811176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.811215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.811234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.823456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.823491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.823510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.835912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.835947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.835966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.848860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.848893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.848912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:07.923 [2024-07-11 23:45:28.860938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:07.923 [2024-07-11 23:45:28.860972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.923 [2024-07-11 23:45:28.860991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.181 [2024-07-11 23:45:28.873338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.181 [2024-07-11 23:45:28.873371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.181 [2024-07-11 23:45:28.873390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.181 [2024-07-11 23:45:28.886333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.181 [2024-07-11 23:45:28.886367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.181 [2024-07-11 23:45:28.886386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.181 [2024-07-11 23:45:28.898736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.181 [2024-07-11 23:45:28.898770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.181 [2024-07-11 23:45:28.898789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.181 [2024-07-11 23:45:28.910983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.181 [2024-07-11 23:45:28.911017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.181 [2024-07-11 23:45:28.911036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.181 [2024-07-11 23:45:28.923848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.181 [2024-07-11 23:45:28.923882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.181 [2024-07-11 23:45:28.923901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.181 [2024-07-11 23:45:28.936106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.181 [2024-07-11 23:45:28.936148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:28.936170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:28.948425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:28.948469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:28.948495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:28.960587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:28.960621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:28.960640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:28.973715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:28.973747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:28.973767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:28.986102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 
00:32:08.182 [2024-07-11 23:45:28.986136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:28.986164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:28.998304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:28.998337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:28.998356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.011477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.011510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.011528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.023921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.023954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.023974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.036044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.036087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.036107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.048224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.048257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.048277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.061373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.061413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.061434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.073840] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.073874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.073892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.086019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.086052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.086070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.098194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.098226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.098245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.111306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.111338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.111357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.182 [2024-07-11 23:45:29.123759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.182 [2024-07-11 23:45:29.123793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.182 [2024-07-11 23:45:29.123812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.440 [2024-07-11 23:45:29.135960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.440 [2024-07-11 23:45:29.135993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.440 [2024-07-11 23:45:29.136012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.440 [2024-07-11 23:45:29.148120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.440 [2024-07-11 23:45:29.148162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.440 [2024-07-11 23:45:29.148182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:08.440 [2024-07-11 23:45:29.161233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.440 [2024-07-11 23:45:29.161265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.440 [2024-07-11 23:45:29.161284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.440 [2024-07-11 23:45:29.173664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.440 [2024-07-11 23:45:29.173698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.440 [2024-07-11 23:45:29.173717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.440 [2024-07-11 23:45:29.185873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.440 [2024-07-11 23:45:29.185906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.440 [2024-07-11 23:45:29.185925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.440 [2024-07-11 23:45:29.198026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.440 [2024-07-11 23:45:29.198059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.440 [2024-07-11 23:45:29.198078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.211151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.211185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.211204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.223443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.223478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.223497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.235337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.235370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.235389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.248812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.248845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.248864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.261266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.261300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.261319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.273293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.273326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.273351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.286472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.286505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.286524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.298818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.298852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.298871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.311072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.311105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.311124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.323280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.323313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.323332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.336388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.336422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.336442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.348728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.348763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.348783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.360654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.360688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.360707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.373718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.373751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.373770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.441 [2024-07-11 23:45:29.386192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.441 [2024-07-11 23:45:29.386231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.441 [2024-07-11 23:45:29.386263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.699 [2024-07-11 23:45:29.398334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.699 [2024-07-11 23:45:29.398369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.699 [2024-07-11 23:45:29.398389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.699 [2024-07-11 23:45:29.411511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660) 00:32:08.699 [2024-07-11 23:45:29.411554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:08.699 [2024-07-11 23:45:29.411573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:08.699 [2024-07-11 23:45:29.423962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208f660)
00:32:08.699 [2024-07-11 23:45:29.423996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:08.699 [2024-07-11 23:45:29.424016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further entries of the same three-line pattern omitted: a data digest error on tqpair=(0x208f660), the failed READ command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeated with varying timestamp, cid, and lba from [2024-07-11 23:45:29.435930] through [2024-07-11 23:45:30.445589] ...]
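Each failure in the dump above is one three-line group: nvme_tcp.c reports a data digest (CRC32C) mismatch on the receive path for tqpair 0x208f660, nvme_qpair.c prints the affected READ command, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR. Reading the "(00/22)" in the completion print as SPDK's (status code type / status code) pair, i.e. generic status type 0x0 with the NVMe-defined retryable status code 0x22, is background knowledge rather than something stated in this log. Because the runs are configured with --bdev-retry-count -1 (visible in the second run's setup below), these completions are retried rather than failing the job; the harness tallies them afterwards instead (the (( 153 > 0 )) check below). When inspecting a captured log offline, a quick illustrative one-liner (the file name is hypothetical):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf.log    # one match per failed completion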
00:32:09.734 
00:32:09.734                                 Latency(us)
00:32:09.734 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:09.734 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:09.734 nvme0n1                     :       2.05   19075.10      74.51      0.00     0.00    6565.53    2463.67   46409.20
00:32:09.734 ===================================================================================================================
00:32:09.734 Total                       :              19075.10      74.51      0.00     0.00    6565.53    2463.67   46409.20
00:32:09.734 0
00:32:09.734 23:45:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:09.734 23:45:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:09.734 23:45:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:09.734 | .driver_specific
00:32:09.734 | .nvme_error
00:32:09.734 | .status_code
00:32:09.734 | .command_transient_transport_error'
00:32:09.734 23:45:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:09.992 23:45:30 -- host/digest.sh@71 -- # (( 153 > 0 ))
00:32:09.992 23:45:30 -- host/digest.sh@73 -- # killprocess 382818
00:32:09.992 23:45:30 -- common/autotest_common.sh@926 -- # '[' -z 382818 ']'
00:32:09.992 23:45:30 -- common/autotest_common.sh@930 -- # kill -0 382818
00:32:09.992 23:45:30 -- common/autotest_common.sh@931 -- # uname
00:32:09.992 23:45:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:09.992 23:45:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 382818
00:32:09.992 23:45:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:09.992 23:45:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:09.992 23:45:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 382818'
killing process with pid 382818
23:45:30 -- common/autotest_common.sh@945 -- # kill 382818
Received shutdown signal, test time was about 2.000000 seconds
00:32:09.992 
00:32:09.992                                 Latency(us)
00:32:09.992 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:09.992 ===================================================================================================================
00:32:09.992 Total                       :       0.00       0.00       0.00      0.00     0.00       0.00       0.00
23:45:30 -- common/autotest_common.sh@950 -- # wait 382818
00:32:10.289 23:45:31 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:32:10.289 23:45:31 -- host/digest.sh@54 -- # local rw bs qd
00:32:10.289 23:45:31 -- host/digest.sh@56 -- # rw=randread
00:32:10.289 23:45:31 -- host/digest.sh@56 -- # bs=131072
00:32:10.289 23:45:31 -- host/digest.sh@56 -- # qd=16
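Before the second run's trace below, a minimal sketch of the verification step that the get_transient_errcount / bperf_rpc / jq lines above perform, reconstructed from the xtrace output rather than copied from host/digest.sh: the counters kept by --nvme-error-stat ride along in bdev_get_iostat output, and the test passes only if at least one transient transport error was recorded (153 in this run).

    # Reconstruction from the xtrace above; not the verbatim host/digest.sh source.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    get_transient_errcount() {
        local bdev=$1
        # per-status-code NVMe error counters appear under driver_specific
        # when bdev_nvme_set_options was given --nvme-error-stat
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))    # a zero count here would fail the test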
23:45:31 -- host/digest.sh@58 -- # bperfpid=383339
00:32:10.289 23:45:31 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:10.289 23:45:31 -- host/digest.sh@60 -- # waitforlisten 383339 /var/tmp/bperf.sock
00:32:10.289 23:45:31 -- common/autotest_common.sh@819 -- # '[' -z 383339 ']'
00:32:10.289 23:45:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:10.289 23:45:31 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:10.289 23:45:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:10.289 23:45:31 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:10.289 23:45:31 -- common/autotest_common.sh@10 -- # set +x
00:32:10.289 [2024-07-11 23:45:31.147232] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:10.289 [2024-07-11 23:45:31.147324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383339 ]
00:32:10.289 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:10.289 Zero copy mechanism will not be used.
00:32:10.289 EAL: No free 2048 kB hugepages reported on node 1
00:32:10.569 [2024-07-11 23:45:31.217699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:10.569 [2024-07-11 23:45:31.314646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:11.501 23:45:32 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:11.501 23:45:32 -- common/autotest_common.sh@852 -- # return 0
00:32:11.501 23:45:32 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:11.501 23:45:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:11.501 23:45:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:11.501 23:45:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:11.501 23:45:32 -- common/autotest_common.sh@10 -- # set +x
00:32:11.501 23:45:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:11.501 23:45:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:11.501 23:45:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:12.065 nvme0n1
00:32:12.065 23:45:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:12.065 23:45:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:12.065 23:45:32 -- common/autotest_common.sh@10 -- # set +x
00:32:12.065 23:45:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:12.065 23:45:32 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:12.065 23:45:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
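The setup just traced, condensed into a standalone replay sketch: bdevperf is started in wait mode (-z) with the parameters chosen by run_bperf_err randread 131072 16, NVMe error statistics and unlimited bdev retries are switched on, the controller is attached with data digest (--ddgst) so received data is CRC32C-checked, and the accel layer is told to corrupt every 32nd crc32c operation before the workload starts. One detail is an assumption: the two accel_error_inject_error calls go through rpc_cmd in the trace, which normally targets the server application's default RPC socket rather than /var/tmp/bperf.sock, so the socket for those calls is left implicit below.

    # Replay sketch of the traced RPC sequence; see the socket caveat above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    # core mask 0x2, randread, 131072-byte I/O, 2 s run, queue depth 16; -z waits for perform_tests
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    # keep per-status-code NVMe error counters and retry failed I/O indefinitely
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any earlier injection, then attach the target with data digest enabled
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt every 32nd crc32c computation; digest mismatches then surface as transient transport errors
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the registered bdevperf job
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests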
00:32:12.322 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:12.322 Zero copy mechanism will not be used.
00:32:12.322 Running I/O for 2 seconds...
00:32:12.322 [2024-07-11 23:45:33.051893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580)
00:32:12.322 [2024-07-11 23:45:33.051951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:12.322 [2024-07-11 23:45:33.051975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further entries of the same three-line pattern omitted: data digest errors on tqpair=(0xcea580) for READ commands with len:32 on cid:15, repeated with varying timestamp, lba, and sqhd from [2024-07-11 23:45:33.063160] through [2024-07-11 23:45:33.785685], where the capture breaks off mid-entry and the run continues ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.797031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.797064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.797083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.808612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.808645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.808663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.819855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.819889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.819908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.831114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.831154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.831174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.842494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.842528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.842546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.853716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.853749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.853767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.864894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.864928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:13.095 [2024-07-11 23:45:33.864947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.876134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.876174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.876199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.887389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.887421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.887440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.898587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.898620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.898638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.909842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.909874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.909893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.921092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.921124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.921151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.932387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.932420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.932438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.943621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.943654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.943672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.954850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.954882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.095 [2024-07-11 23:45:33.954900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.095 [2024-07-11 23:45:33.966259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.095 [2024-07-11 23:45:33.966291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.096 [2024-07-11 23:45:33.966310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.096 [2024-07-11 23:45:33.977478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.096 [2024-07-11 23:45:33.977515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.096 [2024-07-11 23:45:33.977534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.096 [2024-07-11 23:45:33.988761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.096 [2024-07-11 23:45:33.988794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.096 [2024-07-11 23:45:33.988812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.096 [2024-07-11 23:45:33.999974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.096 [2024-07-11 23:45:34.000006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.096 [2024-07-11 23:45:34.000025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.096 [2024-07-11 23:45:34.011293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.096 [2024-07-11 23:45:34.011326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.096 [2024-07-11 23:45:34.011344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.096 [2024-07-11 23:45:34.022863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.096 [2024-07-11 23:45:34.022898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.096 [2024-07-11 23:45:34.022916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.096 [2024-07-11 23:45:34.034148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.096 [2024-07-11 23:45:34.034180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.096 [2024-07-11 23:45:34.034199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.045414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.045447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.045466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.056795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.056830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.056850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.068257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.068290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.068309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.079589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.079623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.079641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.091231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.091265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.091284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.102613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 
00:32:13.354 [2024-07-11 23:45:34.102647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.102665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.113826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.113859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.113878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.125204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.125237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.125256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.136502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.136536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.136557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.147836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.147869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.147889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.159193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.159227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.159246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.170460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.170500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.170520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.181704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.181736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.181753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.192986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.193019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.193037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.204274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.204306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.204324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.215470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.215503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.215521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.226647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.226679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.226696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.238054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.238085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.238103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.249342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.249375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.249393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.260536] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.260568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.354 [2024-07-11 23:45:34.260586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.354 [2024-07-11 23:45:34.273740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.354 [2024-07-11 23:45:34.273773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.355 [2024-07-11 23:45:34.273792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.355 [2024-07-11 23:45:34.289110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.355 [2024-07-11 23:45:34.289151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.355 [2024-07-11 23:45:34.289171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.304208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.304249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.304267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.320670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.320703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.320721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.339773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.339807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.339826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.360647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.360680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.360699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:32:13.613 [2024-07-11 23:45:34.379697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.379731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.379749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.396849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.396883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.396902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.417879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.417912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.417937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.438536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.438569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.438588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.455453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.455486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.455504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.470331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.470365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.470383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.484948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.484981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.484999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.499734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.499767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.499785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.514638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.514676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.514694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.528151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.528195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.528213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.540956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.540989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.541007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.613 [2024-07-11 23:45:34.553453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.613 [2024-07-11 23:45:34.553491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.613 [2024-07-11 23:45:34.553511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.566165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.566208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.566226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.578708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.578740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.578758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.591669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.591701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.591719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.607617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.607650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.607668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.624096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.624127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.624156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.638908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.638940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.638959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.660420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.660453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.660471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.680746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.680779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.680798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.702006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.702038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:13.871 [2024-07-11 23:45:34.702057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.723064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.723097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.723116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.744020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.744053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.744071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.765727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.765760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.765778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.786472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.786505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.786524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:13.871 [2024-07-11 23:45:34.807443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:13.871 [2024-07-11 23:45:34.807476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.871 [2024-07-11 23:45:34.807494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.828545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.828579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.828597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.849930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.849963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.849981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.870576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.870609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.870634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.892118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.892159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.892179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.913111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.913154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.913175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.934237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.934272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.934291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.955378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.955412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.955430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.976382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.976415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.129 [2024-07-11 23:45:34.976434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:14.129 [2024-07-11 23:45:34.997999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580) 00:32:14.129 [2024-07-11 23:45:34.998033] nvme_qpair.c: 
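Each failed READ above leaves the same three-line signature: the initiator's receive path reports the data digest (CRC32C) mismatch, then the qpair error hooks print the command and its TRANSIENT TRANSPORT ERROR (00/22) completion. A quick way to tally the failures from a saved copy of this console output (build.log is a hypothetical local filename):

  # Count how many completions failed with a data digest error.
  grep -c 'data digest error on tqpair' build.log

  # List the distinct LBAs whose READs were failed, sorted numerically.
  grep -o 'lba:[0-9]*' build.log | sort -t: -k2 -un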
00:32:14.129 [2024-07-11 23:45:35.019096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580)
00:32:14.129 [2024-07-11 23:45:35.019129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:14.129 [2024-07-11 23:45:35.019158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:14.129 [2024-07-11 23:45:35.040156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcea580)
00:32:14.129 [2024-07-11 23:45:35.040199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:14.129 [2024-07-11 23:45:35.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:14.129
00:32:14.129 Latency(us)
00:32:14.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:14.129 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:14.129 nvme0n1 : 2.00 2166.19 270.77 0.00 0.00 7377.94 5485.61 21456.97
00:32:14.129 ===================================================================================================================
00:32:14.129 Total : 2166.19 270.77 0.00 0.00 7377.94 5485.61 21456.97
00:32:14.129 0
00:32:14.129 23:45:35 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:14.129 23:45:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:14.129 23:45:35 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:14.129 | .driver_specific
00:32:14.129 | .nvme_error
00:32:14.129 | .status_code
00:32:14.129 | .command_transient_transport_error'
00:32:14.129 23:45:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:14.694 23:45:35 -- host/digest.sh@71 -- # (( 140 > 0 ))
00:32:14.694 23:45:35 -- host/digest.sh@73 -- # killprocess 383339
00:32:14.694 23:45:35 -- common/autotest_common.sh@926 -- # '[' -z 383339 ']'
00:32:14.694 23:45:35 -- common/autotest_common.sh@930 -- # kill -0 383339
00:32:14.694 23:45:35 -- common/autotest_common.sh@931 -- # uname
00:32:14.694 23:45:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:14.694 23:45:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 383339
00:32:14.694 23:45:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:14.694 23:45:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:14.694 23:45:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 383339'
00:32:14.694 killing process with pid 383339
00:32:14.694 23:45:35 -- common/autotest_common.sh@945 -- # kill 383339
00:32:14.694 Received shutdown signal, test time was about 2.000000 seconds
00:32:14.694
00:32:14.694 Latency(us)
00:32:14.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:14.694 ===================================================================================================================
00:32:14.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:14.694 23:45:35 -- common/autotest_common.sh@950 -- # wait 383339
00:32:14.952 23:45:35 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:32:14.952 23:45:35 -- host/digest.sh@54 -- # local rw bs qd
00:32:14.952 23:45:35 -- host/digest.sh@56 -- # rw=randwrite
00:32:14.952 23:45:35 -- host/digest.sh@56 -- # bs=4096
00:32:14.952 23:45:35 -- host/digest.sh@56 -- # qd=128
00:32:14.952 23:45:35 -- host/digest.sh@58 -- # bperfpid=383840
00:32:14.952 23:45:35 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:14.952 23:45:35 -- host/digest.sh@60 -- # waitforlisten 383840 /var/tmp/bperf.sock
00:32:14.952 23:45:35 -- common/autotest_common.sh@819 -- # '[' -z 383840 ']'
00:32:14.952 23:45:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:14.952 23:45:35 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:14.952 23:45:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:14.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:14.952 23:45:35 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:14.952 23:45:35 -- common/autotest_common.sh@10 -- # set +x
00:32:14.952 [2024-07-11 23:45:35.708457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:14.952 [2024-07-11 23:45:35.708544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383840 ]
00:32:14.952 EAL: No free 2048 kB hugepages reported on node 1
00:32:14.952 [2024-07-11 23:45:35.778948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:14.952 [2024-07-11 23:45:35.875526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:15.885 23:45:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:15.885 23:45:36 -- common/autotest_common.sh@852 -- # return 0
00:32:15.885 23:45:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:15.885 23:45:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:16.142 23:45:37 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:16.142 23:45:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:16.142 23:45:37 -- common/autotest_common.sh@10 -- # set +x
00:32:16.142 23:45:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:16.142 23:45:37 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:16.142 23:45:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:16.707 nvme0n1
00:32:16.707 23:45:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
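Stripped of the xtrace noise, the setup traced above boils down to three RPC calls against the bdevperf app. A stand-alone sketch of the same sequence (socket path, target address, and subsystem NQN are the ones used by this run):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP target with data digest (--ddgst) enabled on the connection.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Have the accel error module corrupt the next 256 crc32c operations.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256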
23:45:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:16.707 23:45:37 -- common/autotest_common.sh@10 -- # set +x
00:32:16.707 23:45:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:16.707 23:45:37 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:16.707 23:45:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:16.965 Running I/O for 2 seconds...
00:32:16.965 [2024-07-11 23:45:37.710549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50
00:32:16.965 [2024-07-11 23:45:37.711814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:16.965 [2024-07-11 23:45:37.711862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:32:16.965 [2024-07-11 23:45:37.723351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50
00:32:16.965 [2024-07-11 23:45:37.724577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:16.965 [2024-07-11 23:45:37.724611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:32:16.965 [2024-07-11 23:45:37.736058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50
00:32:16.965 [2024-07-11 23:45:37.737316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:16.965 [2024-07-11 23:45:37.737350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:32:16.965 [2024-07-11 23:45:37.748870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50
00:32:16.965 [2024-07-11 23:45:37.750153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:16.965 [2024-07-11 23:45:37.750186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:16.965 [2024-07-11 23:45:37.761627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50
00:32:16.965 [2024-07-11 23:45:37.762906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:16.965 [2024-07-11 23:45:37.762938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:16.965 [2024-07-11 23:45:37.774455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50
00:32:16.965 [2024-07-11 23:45:37.775757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:16.965 [2024-07-11 23:45:37.775790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
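Once the two-second randwrite run completes, the harness reads the transient-transport-error counter back the same way it did after the randread run. The stand-alone equivalent of that check, with the jq filter copied verbatim from the get_transient_errcount trace above:

  # Pull the transient transport error count for nvme0n1 out of bdev_get_iostat.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'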
00:32:16.965 [2024-07-11 23:45:37.787147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50 00:32:16.965 [2024-07-11 23:45:37.788453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.788486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.799943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50 00:32:16.965 [2024-07-11 23:45:37.801280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.801312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.812663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50 00:32:16.965 [2024-07-11 23:45:37.813987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.814019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.825372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50 00:32:16.965 [2024-07-11 23:45:37.826713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.826745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.837990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e3060 00:32:16.965 [2024-07-11 23:45:37.839354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.839387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.850641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e5a90 00:32:16.965 [2024-07-11 23:45:37.852031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.852063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.863246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ec408 00:32:16.965 [2024-07-11 23:45:37.864631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.864663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.875909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e73e0 00:32:16.965 [2024-07-11 23:45:37.877311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.877342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.888537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e99d8 00:32:16.965 [2024-07-11 23:45:37.889932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.889964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.901135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f0ff8 00:32:16.965 [2024-07-11 23:45:37.902555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.965 [2024-07-11 23:45:37.902588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:16.965 [2024-07-11 23:45:37.913739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f31b8 00:32:16.965 [2024-07-11 23:45:37.915016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.966 [2024-07-11 23:45:37.915050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:37.926406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:37.927614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:37.927647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:37.939023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:37.940264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:37.940298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:37.951695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:37.952924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:37.952956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:37.964359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:37.965622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:37.965655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:37.977071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:37.978335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:37.978367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:37.989696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:37.991181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:37.991219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.002348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:38.003634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.003667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.014939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:17.223 [2024-07-11 23:45:38.016396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.016427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.027484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f0bc0 00:32:17.223 [2024-07-11 23:45:38.028967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.028998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.040154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e5658 00:32:17.223 [2024-07-11 23:45:38.041935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.041966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.052742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f1430 00:32:17.223 [2024-07-11 23:45:38.054155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.054186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.065237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f1430 00:32:17.223 [2024-07-11 23:45:38.067164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.067195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.077743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50 00:32:17.223 [2024-07-11 23:45:38.079103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.079134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.090393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e5658 00:32:17.223 [2024-07-11 23:45:38.091809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.091840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.102535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7818 00:32:17.223 [2024-07-11 23:45:38.103259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.103289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.115084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ed920 00:32:17.223 [2024-07-11 23:45:38.115556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.223 [2024-07-11 23:45:38.115587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.223 [2024-07-11 23:45:38.129718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190eff18 00:32:17.223 [2024-07-11 23:45:38.131586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.224 [2024-07-11 23:45:38.131617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.224 [2024-07-11 23:45:38.142172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fac10 00:32:17.224 [2024-07-11 23:45:38.143712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.224 [2024-07-11 23:45:38.143743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:17.224 [2024-07-11 23:45:38.154953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ed920 00:32:17.224 [2024-07-11 23:45:38.156750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.224 [2024-07-11 23:45:38.156781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.224 [2024-07-11 23:45:38.167614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f0bc0 00:32:17.224 [2024-07-11 23:45:38.169415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.224 [2024-07-11 23:45:38.169446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.481 [2024-07-11 23:45:38.179891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e6fa8 00:32:17.481 [2024-07-11 23:45:38.180894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.481 [2024-07-11 23:45:38.180925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:17.481 [2024-07-11 23:45:38.193135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190eaab8 00:32:17.481 [2024-07-11 23:45:38.194518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.481 [2024-07-11 23:45:38.194549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:17.481 [2024-07-11 23:45:38.205967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e9e10 00:32:17.481 [2024-07-11 23:45:38.207592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.481 [2024-07-11 23:45:38.207623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.481 [2024-07-11 23:45:38.216601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ef6a8 00:32:17.481 [2024-07-11 23:45:38.217575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.481 [2024-07-11 
23:45:38.217606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:17.481 [2024-07-11 23:45:38.229219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ef6a8 00:32:17.481 [2024-07-11 23:45:38.230227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.230259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.241840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e7c50 00:32:17.482 [2024-07-11 23:45:38.242811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.242841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.254457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f20d8 00:32:17.482 [2024-07-11 23:45:38.255438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.255469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.267079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e4578 00:32:17.482 [2024-07-11 23:45:38.268104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.268136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.279671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e8d30 00:32:17.482 [2024-07-11 23:45:38.280709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.280740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.292262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f3e60 00:32:17.482 [2024-07-11 23:45:38.293367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.293398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.304873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f92c0 00:32:17.482 [2024-07-11 23:45:38.306232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.482 [2024-07-11 23:45:38.306275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.317515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f6cc8 00:32:17.482 [2024-07-11 23:45:38.318886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.318923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.330133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f7538 00:32:17.482 [2024-07-11 23:45:38.331081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.342682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ea680 00:32:17.482 [2024-07-11 23:45:38.344101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.344133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.357200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e5220 00:32:17.482 [2024-07-11 23:45:38.358601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.358633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.369726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f1ca0 00:32:17.482 [2024-07-11 23:45:38.371099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.371130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.382281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f7538 00:32:17.482 [2024-07-11 23:45:38.383608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.383639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.394843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190eb760 00:32:17.482 [2024-07-11 23:45:38.396289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7113 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.396320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.407446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190eff18 00:32:17.482 [2024-07-11 23:45:38.408854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.408885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.482 [2024-07-11 23:45:38.420007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e84c0 00:32:17.482 [2024-07-11 23:45:38.421379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.482 [2024-07-11 23:45:38.421410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.432586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190feb58 00:32:17.740 [2024-07-11 23:45:38.433778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.433810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.445162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e99d8 00:32:17.740 [2024-07-11 23:45:38.446225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.446255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.457728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f4298 00:32:17.740 [2024-07-11 23:45:38.458964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.458995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.470283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f8e88 00:32:17.740 [2024-07-11 23:45:38.471303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.471334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.482846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f6890 00:32:17.740 [2024-07-11 23:45:38.483822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:12285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.483852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.495405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f92c0 00:32:17.740 [2024-07-11 23:45:38.496315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.496347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.508023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ebfd0 00:32:17.740 [2024-07-11 23:45:38.508854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.508885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.520603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f1ca0 00:32:17.740 [2024-07-11 23:45:38.522423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.522455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.533230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e6300 00:32:17.740 [2024-07-11 23:45:38.534754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.534785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.545860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f8e88 00:32:17.740 [2024-07-11 23:45:38.547404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.547434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.558411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fb480 00:32:17.740 [2024-07-11 23:45:38.559914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.559944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.571009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f5378 00:32:17.740 [2024-07-11 23:45:38.572580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:25330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.572610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.583664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fb8b8 00:32:17.740 [2024-07-11 23:45:38.585240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.585271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.596284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f8e88 00:32:17.740 [2024-07-11 23:45:38.597867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.740 [2024-07-11 23:45:38.597897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:17.740 [2024-07-11 23:45:38.608922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ff3c8 00:32:17.740 [2024-07-11 23:45:38.610515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-11 23:45:38.610546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:17.741 [2024-07-11 23:45:38.621540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f2d80 00:32:17.741 [2024-07-11 23:45:38.623165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-11 23:45:38.623195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:17.741 [2024-07-11 23:45:38.634179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ecc78 00:32:17.741 [2024-07-11 23:45:38.635797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-11 23:45:38.635828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:17.741 [2024-07-11 23:45:38.646759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e9168 00:32:17.741 [2024-07-11 23:45:38.648407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-11 23:45:38.648447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:17.741 [2024-07-11 23:45:38.659357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ed4e8 00:32:17.741 [2024-07-11 23:45:38.661010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-11 23:45:38.661041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:17.741 [2024-07-11 23:45:38.671965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f6890 00:32:17.741 [2024-07-11 23:45:38.673640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-11 23:45:38.673670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.741 [2024-07-11 23:45:38.684544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f6890 00:32:17.741 [2024-07-11 23:45:38.686217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.741 [2024-07-11 23:45:38.686249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.695616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190eb760 00:32:17.999 [2024-07-11 23:45:38.696555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.696586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.708273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ed4e8 00:32:17.999 [2024-07-11 23:45:38.709185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.709217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.720871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f1868 00:32:17.999 [2024-07-11 23:45:38.721757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.721789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.733406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fb480 00:32:17.999 [2024-07-11 23:45:38.734339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.734370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.745956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e6fa8 00:32:17.999 [2024-07-11 
23:45:38.746894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.746925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.758487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e99d8 00:32:17.999 [2024-07-11 23:45:38.759505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.759536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.771152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fac10 00:32:17.999 [2024-07-11 23:45:38.772132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.772170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.783831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fda78 00:32:17.999 [2024-07-11 23:45:38.784874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.784905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.796448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f0350 00:32:17.999 [2024-07-11 23:45:38.797495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.797526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.809151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e6fa8 00:32:17.999 [2024-07-11 23:45:38.810165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.810195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.821672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fd640 00:32:17.999 [2024-07-11 23:45:38.822693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.822724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.834206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190eb760 
00:32:17.999 [2024-07-11 23:45:38.835232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.835263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.846730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e6b70 00:32:17.999 [2024-07-11 23:45:38.847783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.847815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.859316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f1868 00:32:17.999 [2024-07-11 23:45:38.860458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.860491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.872004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190feb58 00:32:17.999 [2024-07-11 23:45:38.873182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.873215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.884735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fef90 00:32:17.999 [2024-07-11 23:45:38.885955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.999 [2024-07-11 23:45:38.885987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:17.999 [2024-07-11 23:45:38.897416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fb048 00:32:18.000 [2024-07-11 23:45:38.898670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.000 [2024-07-11 23:45:38.898702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:18.000 [2024-07-11 23:45:38.910022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f0350 00:32:18.000 [2024-07-11 23:45:38.911318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.000 [2024-07-11 23:45:38.911349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:18.000 [2024-07-11 23:45:38.922599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with 
pdu=0x2000190f0350 00:32:18.000 [2024-07-11 23:45:38.923895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.000 [2024-07-11 23:45:38.923926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:18.000 [2024-07-11 23:45:38.935249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f0350 00:32:18.000 [2024-07-11 23:45:38.936546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.000 [2024-07-11 23:45:38.936577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:18.000 [2024-07-11 23:45:38.947843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f0350 00:32:18.258 [2024-07-11 23:45:38.949200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:38.949231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:38.960161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190fdeb0 00:32:18.258 [2024-07-11 23:45:38.961130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:38.961168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:38.972805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e99d8 00:32:18.258 [2024-07-11 23:45:38.973802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:38.973839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:38.985740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e9e10 00:32:18.258 [2024-07-11 23:45:38.986899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:38.986930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:38.998596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190ff3c8 00:32:18.258 [2024-07-11 23:45:38.999421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:38.999453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.011445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24dc730) with pdu=0x2000190fda78 00:32:18.258 [2024-07-11 23:45:39.012516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:39.012548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.024091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f96f8 00:32:18.258 [2024-07-11 23:45:39.025173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:39.025205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.036724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e4de8 00:32:18.258 [2024-07-11 23:45:39.037811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:39.037842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.049289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190f1868 00:32:18.258 [2024-07-11 23:45:39.050388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:39.050419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.061931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:18.258 [2024-07-11 23:45:39.062975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:39.063006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.074574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:18.258 [2024-07-11 23:45:39.075681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:39.075712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.087202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8 00:32:18.258 [2024-07-11 23:45:39.088306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.258 [2024-07-11 23:45:39.088337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:18.258 [2024-07-11 23:45:39.099772] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190e88f8
00:32:18.258 [2024-07-11 23:45:39.100908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.258 [2024-07-11 23:45:39.100939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0
[... the same three-record pattern (tcp.c:2034:data_crc32_calc_done *ERROR*: Data digest error, nvme_qpair.c:243 WRITE print_command, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at ~11-13 ms intervals from 23:45:39.100908 through 23:45:39.695891, always on tqpair=(0x24dc730), qid:1, len:1, with cid, lba, pdu and sqhd varying per I/O ...]
00:32:18.777 [2024-07-11 23:45:39.694821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24dc730) with pdu=0x2000190edd58
00:32:18.777 [2024-07-11 23:45:39.695860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.777 [2024-07-11 23:45:39.695891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:32:18.777
00:32:18.777 Latency(us)
00:32:18.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:18.777 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:18.777 nvme0n1 : 2.00 20184.14 78.84 0.00 0.00 6331.75 3131.16 14757.74
00:32:18.777 ===================================================================================================================
00:32:18.777 Total : 20184.14 78.84 0.00 0.00 6331.75 3131.16 14757.74
00:32:18.777 0
00:32:18.777 23:45:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:18.777 23:45:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:18.777 23:45:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:18.777 | .driver_specific
00:32:18.777 | .nvme_error
00:32:18.777 | .status_code
00:32:18.777 | .command_transient_transport_error'
00:32:18.777 23:45:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:19.343 23:45:40 -- host/digest.sh@71 -- # (( 158 > 0 ))
00:32:19.343 23:45:40 -- host/digest.sh@73 -- # killprocess 383840
00:32:19.343 23:45:40 -- common/autotest_common.sh@926 -- # '[' -z 383840 ']'
00:32:19.343 23:45:40 -- common/autotest_common.sh@930 -- # kill -0 383840
00:32:19.343 23:45:40 -- common/autotest_common.sh@931 -- # uname
00:32:19.343 23:45:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:19.343 23:45:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 383840
00:32:19.343 23:45:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:19.343 23:45:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:19.343 23:45:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 383840'
00:32:19.343 killing process with pid 383840
00:32:19.343 23:45:40 -- common/autotest_common.sh@945 -- # kill 383840
00:32:19.343 Received shutdown signal, test time was about 2.000000 seconds
00:32:19.343
00:32:19.343 Latency(us)
00:32:19.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:19.343 ===================================================================================================================
00:32:19.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:19.343 23:45:40 -- common/autotest_common.sh@950 -- # wait 383840
00:32:19.602 23:45:40 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:32:19.602 23:45:40 -- host/digest.sh@54 -- # local rw bs qd
00:32:19.602 23:45:40 -- host/digest.sh@56 -- # rw=randwrite
00:32:19.602 23:45:40 -- host/digest.sh@56 -- # bs=131072
00:32:19.602 23:45:40 -- host/digest.sh@56 -- # qd=16
00:32:19.602 23:45:40 -- host/digest.sh@58 -- # bperfpid=384418
00:32:19.602 23:45:40 -- host/digest.sh@60 -- # waitforlisten 384418 /var/tmp/bperf.sock
00:32:19.602 23:45:40 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:19.602 23:45:40 -- common/autotest_common.sh@819 -- # '[' -z 384418 ']'
00:32:19.602 23:45:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:19.602 23:45:40 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:19.602 23:45:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:19.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:19.602 23:45:40 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:19.602 23:45:40 -- common/autotest_common.sh@10 -- # set +x
00:32:19.602 [2024-07-11 23:45:40.354018] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:19.602 [2024-07-11 23:45:40.354108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384418 ]
00:32:19.602 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:19.602 Zero copy mechanism will not be used.
00:32:19.602 EAL: No free 2048 kB hugepages reported on node 1
00:32:19.602 [2024-07-11 23:45:40.424359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:19.602 [2024-07-11 23:45:40.519740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:20.537 23:45:41 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:20.538 23:45:41 -- common/autotest_common.sh@852 -- # return 0
00:32:20.538 23:45:41 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:20.538 23:45:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:20.795 23:45:41 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:20.795 23:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:20.795 23:45:41 -- common/autotest_common.sh@10 -- # set +x
00:32:21.052 23:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:21.052 23:45:41 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:21.052 23:45:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:21.308 nvme0n1
00:32:21.308 23:45:42 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:21.308 23:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:21.308 23:45:42 -- common/autotest_common.sh@10 -- # set +x
00:32:21.308 23:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:21.308 23:45:42 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:21.308 23:45:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:21.566 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:21.566 Zero copy mechanism will not be used.
00:32:21.566 Running I/O for 2 seconds...
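For orientation, the 131072-byte pass traced above boils down to a short RPC sequence against the bdevperf application. The following is a minimal bash sketch reconstructed from the trace, not the literal host/digest.sh source: it assumes the same SPDK checkout path and /var/tmp/bperf.sock socket shown above, and it omits the waitforlisten/killprocess plumbing.

    #!/usr/bin/env bash
    # Sketch of the digest-error pass traced above; paths and values are taken from the log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Start bdevperf in wait-for-RPC mode (-z): 128 KiB random writes, queue depth 16, 2 s run.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Keep per-status NVMe error counters and retry failed I/O indefinitely at the bdev layer.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any leftover injection, then attach the TCP target with data digest (DDGST) enabled.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd software crc32c result, so digest verification fails on the wire and
    # each affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload, then read back the transient-transport-error completion count.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
    $RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the 4096-byte pass above, the extracted counter read 158; the test only asserts that it is greater than zero.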
00:32:21.566 [2024-07-11 23:45:42.324413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90
00:32:21.566 [2024-07-11 23:45:42.324840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:21.566 [2024-07-11 23:45:42.324880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern repeats at ~12-16 ms intervals from 23:45:42.338796 through 23:45:43.412697, always on tqpair=(0x24ddc80) with pdu=0x2000190fef90, qid:1 cid:15 len:32, with only lba and sqhd varying per I/O ...]
00:32:22.597 [2024-07-11 23:45:43.426211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90
00:32:22.597 [2024-07-11 23:45:43.426545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:22.597 [2024-07-11 23:45:43.426576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.597 [2024-07-11 23:45:43.440269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.597 [2024-07-11 23:45:43.440691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.597 [2024-07-11 23:45:43.440722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.597 [2024-07-11 23:45:43.454602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.597 [2024-07-11 23:45:43.454865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.597 [2024-07-11 23:45:43.454896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.597 [2024-07-11 23:45:43.468518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.597 [2024-07-11 23:45:43.468902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.597 [2024-07-11 23:45:43.468940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.597 [2024-07-11 23:45:43.482476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.597 [2024-07-11 23:45:43.482792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.597 [2024-07-11 23:45:43.482825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.597 [2024-07-11 23:45:43.496595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.597 [2024-07-11 23:45:43.496962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.597 [2024-07-11 23:45:43.496993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.597 [2024-07-11 23:45:43.510083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.597 [2024-07-11 23:45:43.510407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.598 [2024-07-11 23:45:43.510438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.598 [2024-07-11 23:45:43.523458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.598 [2024-07-11 23:45:43.523784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.598 [2024-07-11 23:45:43.523814] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.598 [2024-07-11 23:45:43.536846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.598 [2024-07-11 23:45:43.537105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.598 [2024-07-11 23:45:43.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.550482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.550690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.563737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.564072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.564103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.576356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.576657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.576687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.590535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.590984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.591015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.603682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.604064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.604094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.617902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.618240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.618271] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.630606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.630946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.630977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.644677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.644891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.644922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.658397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.658705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.658736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.672466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.672735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.672766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.686724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.686921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.686952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.699307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.699620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.699650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.712537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.712936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:22.855 [2024-07-11 23:45:43.712968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.725862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.726127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.726166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.739835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.740115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.740155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.752973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.753288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.855 [2024-07-11 23:45:43.753319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.855 [2024-07-11 23:45:43.766984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.855 [2024-07-11 23:45:43.767264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.856 [2024-07-11 23:45:43.767294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.856 [2024-07-11 23:45:43.779990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.856 [2024-07-11 23:45:43.780271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.856 [2024-07-11 23:45:43.780302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.856 [2024-07-11 23:45:43.792952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:22.856 [2024-07-11 23:45:43.793301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.856 [2024-07-11 23:45:43.793332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.113 [2024-07-11 23:45:43.806187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.113 [2024-07-11 23:45:43.806565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.113 [2024-07-11 23:45:43.806596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.113 [2024-07-11 23:45:43.819345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.819718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.819755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.832252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.832585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.832616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.845791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.846111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.846150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.859211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.859622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.859653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.872784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.873013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.873044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.885803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.886114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.886151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.899349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.899797] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.899828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.913223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.913564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.913596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.926585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.926899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.926930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.941089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.941349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.941381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.954208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.954415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.954446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.966917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.967223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.967254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.980469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.980787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.980817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:43.993885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:43.994116] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:43.994151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:44.006619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:44.006967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:44.006998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:44.020564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:44.020962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:44.020992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:44.033586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:44.034003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:44.034033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:44.046561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:44.046929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:44.046960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.114 [2024-07-11 23:45:44.059003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.114 [2024-07-11 23:45:44.059339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.114 [2024-07-11 23:45:44.059371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.072131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.072452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.072482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.084966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 
00:32:23.372 [2024-07-11 23:45:44.085190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.085220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.097282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.097635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.097665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.109753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.110195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.110226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.123358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.123759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.123790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.138009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.138395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.138426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.151549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.151900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.151930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.165423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.165799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.165835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.178736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.179048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.179078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.192396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.192723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.192754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.205739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.205961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.205991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.218159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.218513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.218543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.231763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.231975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.232005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.245731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.246112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.246152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.258539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ddc80) with pdu=0x2000190fef90 00:32:23.372 [2024-07-11 23:45:44.258816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.372 [2024-07-11 23:45:44.258846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.372 [2024-07-11 23:45:44.271029] 
00:32:23.373
00:32:23.373 Latency(us)
00:32:23.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.373 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:23.373 nvme0n1 : 2.01 2299.02 287.38 0.00 0.00 6942.50 5267.15 18155.90
00:32:23.373 ===================================================================================================================
00:32:23.373 Total : 2299.02 287.38 0.00 0.00 6942.50 5267.15 18155.90
00:32:23.373 0
00:32:23.630 23:45:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:23.630 23:45:44 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:23.630 23:45:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:23.630 | .driver_specific
00:32:23.630 | .nvme_error
00:32:23.630 | .status_code
00:32:23.630 | .command_transient_transport_error'
00:32:23.630 23:45:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:23.887 23:45:44 -- host/digest.sh@71 -- # (( 148 > 0 ))
00:32:23.887 23:45:44 -- host/digest.sh@73 -- # killprocess 384418
00:32:23.887 23:45:44 -- common/autotest_common.sh@926 -- # '[' -z 384418 ']'
00:32:23.887 23:45:44 -- common/autotest_common.sh@930 -- # kill -0 384418
00:32:23.887 23:45:44 -- common/autotest_common.sh@931 -- # uname
00:32:23.887 23:45:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:23.887 23:45:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 384418
00:32:23.887 23:45:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:23.887 23:45:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:23.887 23:45:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 384418'
00:32:23.887 killing process with pid 384418
00:32:23.887 23:45:44 -- common/autotest_common.sh@945 -- # kill 384418
00:32:23.887 Received shutdown signal, test time was about 2.000000 seconds
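The get_transient_errcount call traced above is what turns the injected digest failures into a pass/fail signal: it pulls the bdev's iostat JSON over the bperf RPC socket and walks it down to the per-status-code error counter. A sketch of that helper, reconstructed only from the trace (paths and socket name as logged; jq must be available):

get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
# the assertion traced above as (( 148 > 0 )):
(( $(get_transient_errcount nvme0n1) > 0 ))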
00:32:23.887
00:32:23.887 Latency(us)
00:32:23.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.887 ===================================================================================================================
00:32:23.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:23.887 23:45:44 -- common/autotest_common.sh@950 -- # wait 384418
00:32:24.145 23:45:44 -- host/digest.sh@115 -- # killprocess 382794
00:32:24.145 23:45:44 -- common/autotest_common.sh@926 -- # '[' -z 382794 ']'
00:32:24.145 23:45:44 -- common/autotest_common.sh@930 -- # kill -0 382794
00:32:24.145 23:45:44 -- common/autotest_common.sh@931 -- # uname
00:32:24.145 23:45:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:24.145 23:45:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 382794
00:32:24.145 23:45:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:32:24.145 23:45:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:32:24.145 23:45:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 382794'
00:32:24.145 killing process with pid 382794
00:32:24.145 23:45:44 -- common/autotest_common.sh@945 -- # kill 382794
00:32:24.145 23:45:44 -- common/autotest_common.sh@950 -- # wait 382794
00:32:24.402
00:32:24.402 real 0m18.437s
00:32:24.402 user 0m38.189s
00:32:24.402 sys 0m4.623s
00:32:24.402 23:45:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:24.402 23:45:45 -- common/autotest_common.sh@10 -- # set +x
00:32:24.402 ************************************
00:32:24.402 END TEST nvmf_digest_error
00:32:24.402 ************************************
00:32:24.402 23:45:45 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:32:24.402 23:45:45 -- host/digest.sh@139 -- # nvmftestfini
00:32:24.402 23:45:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:32:24.402 23:45:45 -- nvmf/common.sh@116 -- # sync
00:32:24.402 23:45:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:32:24.402 23:45:45 -- nvmf/common.sh@119 -- # set +e
00:32:24.402 23:45:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:32:24.402 23:45:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:32:24.402 rmmod nvme_tcp
00:32:24.402 rmmod nvme_fabrics
00:32:24.402 rmmod nvme_keyring
00:32:24.402 23:45:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:32:24.402 23:45:45 -- nvmf/common.sh@123 -- # set -e
00:32:24.402 23:45:45 -- nvmf/common.sh@124 -- # return 0
00:32:24.402 23:45:45 -- nvmf/common.sh@477 -- # '[' -n 382794 ']'
00:32:24.402 23:45:45 -- nvmf/common.sh@478 -- # killprocess 382794
00:32:24.402 23:45:45 -- common/autotest_common.sh@926 -- # '[' -z 382794 ']'
00:32:24.402 23:45:45 -- common/autotest_common.sh@930 -- # kill -0 382794
00:32:24.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (382794) - No such process
00:32:24.402 23:45:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 382794 is not found'
00:32:24.402 Process with pid 382794 is not found
00:32:24.402 23:45:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:32:24.402 23:45:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:32:24.402 23:45:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:32:24.402 23:45:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:24.402 23:45:45 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:32:24.402 23:45:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:24.402 23:45:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
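The killprocess helper that runs twice here (once for the bperf client and once for pid 382794, which is already gone by the second call, hence the "No such process" path) reduces to the steps visible in the trace. A condensed sketch using only the calls that appear above; the in-tree helper carries additional branches for non-Linux platforms:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                      # the '[' -z ... ']' guard
    if kill -0 "$pid" 2>/dev/null; then            # is the process still alive?
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1     # never signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                    # reap it; only works for our own children
    else
        echo "Process with pid $pid is not found"  # the path taken for 382794 above
    fi
}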
00:32:26.940 23:45:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:32:26.940
00:32:26.940 real 0m42.338s
00:32:26.940 user 1m18.842s
00:32:26.940 sys 0m11.599s
00:32:26.940 23:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:26.940 23:45:47 -- common/autotest_common.sh@10 -- # set +x
00:32:26.940 ************************************
00:32:26.940 END TEST nvmf_digest
00:32:26.940 ************************************
00:32:26.940 23:45:47 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:32:26.940 23:45:47 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:32:26.940 23:45:47 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:32:26.940 23:45:47 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:26.940 23:45:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:32:26.940 23:45:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:26.940 23:45:47 -- common/autotest_common.sh@10 -- # set +x
00:32:26.940 ************************************
00:32:26.940 START TEST nvmf_bdevperf
00:32:26.940 ************************************
00:32:26.940 23:45:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:26.940 * Looking for test storage...
00:32:26.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:26.940 23:45:47 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:26.940 23:45:47 -- nvmf/common.sh@7 -- # uname -s
00:32:26.940 23:45:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:26.940 23:45:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:26.940 23:45:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:26.940 23:45:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:26.940 23:45:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:26.940 23:45:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:26.940 23:45:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:26.940 23:45:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:26.940 23:45:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:26.940 23:45:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:26.940 23:45:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:32:26.940 23:45:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:32:26.940 23:45:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:26.940 23:45:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:26.940 23:45:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:26.940 23:45:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:26.940 23:45:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:26.940 23:45:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:26.940 23:45:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:26.940 23:45:47 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.940 23:45:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.940 23:45:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.940 23:45:47 -- paths/export.sh@5 -- # export PATH 00:32:26.940 23:45:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.940 23:45:47 -- nvmf/common.sh@46 -- # : 0 00:32:26.940 23:45:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:26.940 23:45:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:26.940 23:45:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:26.940 23:45:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.940 23:45:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.940 23:45:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:26.940 23:45:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:26.940 23:45:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:26.940 23:45:47 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:26.940 23:45:47 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:26.940 23:45:47 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:26.940 23:45:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:26.940 23:45:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.940 23:45:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:26.940 23:45:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:26.940 23:45:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:26.940 23:45:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:32:26.940 23:45:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:26.940 23:45:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.940 23:45:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:26.940 23:45:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:26.940 23:45:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:26.940 23:45:47 -- common/autotest_common.sh@10 -- # set +x 00:32:29.511 23:45:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:29.511 23:45:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:29.511 23:45:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:29.511 23:45:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:29.511 23:45:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:29.511 23:45:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:29.511 23:45:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:29.511 23:45:50 -- nvmf/common.sh@294 -- # net_devs=() 00:32:29.511 23:45:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:29.511 23:45:50 -- nvmf/common.sh@295 -- # e810=() 00:32:29.511 23:45:50 -- nvmf/common.sh@295 -- # local -ga e810 00:32:29.511 23:45:50 -- nvmf/common.sh@296 -- # x722=() 00:32:29.511 23:45:50 -- nvmf/common.sh@296 -- # local -ga x722 00:32:29.511 23:45:50 -- nvmf/common.sh@297 -- # mlx=() 00:32:29.511 23:45:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:29.511 23:45:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.511 23:45:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:29.511 23:45:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:29.511 23:45:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:29.511 23:45:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:29.511 23:45:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:29.511 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:29.511 23:45:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:29.511 23:45:50 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:29.511 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:29.511 23:45:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:29.511 23:45:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:29.511 23:45:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.511 23:45:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:29.511 23:45:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.511 23:45:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:29.511 Found net devices under 0000:84:00.0: cvl_0_0 00:32:29.511 23:45:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.511 23:45:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:29.511 23:45:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.511 23:45:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:29.511 23:45:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.511 23:45:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:29.511 Found net devices under 0000:84:00.1: cvl_0_1 00:32:29.511 23:45:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.511 23:45:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:29.511 23:45:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:29.511 23:45:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:29.511 23:45:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.511 23:45:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.511 23:45:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.511 23:45:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:29.511 23:45:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.511 23:45:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.511 23:45:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:29.511 23:45:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.511 23:45:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.511 23:45:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:29.511 23:45:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:29.511 23:45:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.511 23:45:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.511 23:45:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.511 23:45:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.511 23:45:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:29.511 23:45:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
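Condensed from the nvmf_tcp_init trace in this block and the connectivity checks that follow: one port of the two-port NIC (cvl_0_0) is moved into a private namespace for the target at 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace for the initiator at 10.0.0.1, and a firewall rule opens the NVMe/TCP port. A sketch of the same plumbing, using only commands visible in the trace (run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator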
00:32:29.511 23:45:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.511 23:45:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.511 23:45:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:29.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:32:29.511 00:32:29.511 --- 10.0.0.2 ping statistics --- 00:32:29.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.511 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:32:29.511 23:45:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:29.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:32:29.511 00:32:29.511 --- 10.0.0.1 ping statistics --- 00:32:29.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.511 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:32:29.511 23:45:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.511 23:45:50 -- nvmf/common.sh@410 -- # return 0 00:32:29.511 23:45:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:29.511 23:45:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.511 23:45:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:29.511 23:45:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.511 23:45:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:29.511 23:45:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:29.511 23:45:50 -- host/bdevperf.sh@25 -- # tgt_init 00:32:29.511 23:45:50 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:29.511 23:45:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:29.511 23:45:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:29.511 23:45:50 -- common/autotest_common.sh@10 -- # set +x 00:32:29.511 23:45:50 -- nvmf/common.sh@469 -- # nvmfpid=386996 00:32:29.511 23:45:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:29.512 23:45:50 -- nvmf/common.sh@470 -- # waitforlisten 386996 00:32:29.512 23:45:50 -- common/autotest_common.sh@819 -- # '[' -z 386996 ']' 00:32:29.512 23:45:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.512 23:45:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:29.512 23:45:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.512 23:45:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:29.512 23:45:50 -- common/autotest_common.sh@10 -- # set +x 00:32:29.512 [2024-07-11 23:45:50.297378] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
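The xtrace above is nvmf/common.sh carving the two E810 ports (device 0x159b, ice driver) into an initiator/target pair: one physical function stays in the root namespace as the initiator, the other moves into a private network namespace so NVMe/TCP traffic crosses a real link between the two ports. Condensed to its effect, the bring-up is the following sequence; a minimal sketch assuming the same cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing seen in the log (interface names are machine-specific):

  # Target-side port gets its own network namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2 inside.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring up both ends plus loopback in the namespace, and open the NVMe/TCP port.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check reachability in both directions before starting nvmf_tgt.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The iptables rule is inserted at position 1 so a host firewall cannot drop port 4420 before the connect attempts later in the run; the two pings confirm the path works before nvmf_tgt is launched inside the namespace.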
00:32:29.512 [2024-07-11 23:45:50.297510] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:29.512 EAL: No free 2048 kB hugepages reported on node 1
00:32:29.512 [2024-07-11 23:45:50.412516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:29.770 [2024-07-11 23:45:50.523820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:29.770 [2024-07-11 23:45:50.523979] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:29.770 [2024-07-11 23:45:50.523999] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:29.770 [2024-07-11 23:45:50.524013] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:29.770 [2024-07-11 23:45:50.524254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:29.770 [2024-07-11 23:45:50.528181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:32:29.770 [2024-07-11 23:45:50.528195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:29.770 23:45:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:29.770 23:45:50 -- common/autotest_common.sh@852 -- # return 0
00:32:29.770 23:45:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:32:29.770 23:45:50 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:29.770 23:45:50 -- common/autotest_common.sh@10 -- # set +x
00:32:29.770 23:45:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:29.770 23:45:50 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:29.770 23:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:29.770 23:45:50 -- common/autotest_common.sh@10 -- # set +x
00:32:29.770 [2024-07-11 23:45:50.711691] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:29.770 23:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:29.770 23:45:50 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:29.770 23:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:29.770 23:45:50 -- common/autotest_common.sh@10 -- # set +x
00:32:30.028 Malloc0
00:32:30.028 23:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:30.028 23:45:50 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:30.028 23:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:30.028 23:45:50 -- common/autotest_common.sh@10 -- # set +x
00:32:30.028 23:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:30.028 23:45:50 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:30.028 23:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:30.028 23:45:50 -- common/autotest_common.sh@10 -- # set +x
00:32:30.028 23:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:30.028 23:45:50 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:30.028 23:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:30.028 23:45:50 -- common/autotest_common.sh@10 -- # set +x
00:32:30.028 [2024-07-11 23:45:50.773904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:30.028 23:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:30.028 23:45:50 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:32:30.028 23:45:50 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:32:30.028 23:45:50 -- nvmf/common.sh@520 -- # config=()
00:32:30.028 23:45:50 -- nvmf/common.sh@520 -- # local subsystem config
00:32:30.028 23:45:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:32:30.028 23:45:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:32:30.028 {
00:32:30.028 "params": {
00:32:30.028 "name": "Nvme$subsystem",
00:32:30.028 "trtype": "$TEST_TRANSPORT",
00:32:30.028 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:30.028 "adrfam": "ipv4",
00:32:30.028 "trsvcid": "$NVMF_PORT",
00:32:30.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:30.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:30.028 "hdgst": ${hdgst:-false},
00:32:30.028 "ddgst": ${ddgst:-false}
00:32:30.028 },
00:32:30.028 "method": "bdev_nvme_attach_controller"
00:32:30.028 }
00:32:30.028 EOF
00:32:30.028 )")
00:32:30.028 23:45:50 -- nvmf/common.sh@542 -- # cat
00:32:30.028 23:45:50 -- nvmf/common.sh@544 -- # jq .
00:32:30.028 23:45:50 -- nvmf/common.sh@545 -- # IFS=,
00:32:30.028 23:45:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:32:30.028 "params": {
00:32:30.028 "name": "Nvme1",
00:32:30.028 "trtype": "tcp",
00:32:30.028 "traddr": "10.0.0.2",
00:32:30.028 "adrfam": "ipv4",
00:32:30.028 "trsvcid": "4420",
00:32:30.028 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:30.028 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:30.028 "hdgst": false,
00:32:30.028 "ddgst": false
00:32:30.028 },
00:32:30.028 "method": "bdev_nvme_attach_controller"
00:32:30.028 }'
00:32:30.028 [2024-07-11 23:45:50.828074] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:30.028 [2024-07-11 23:45:50.828181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387063 ]
00:32:30.028 EAL: No free 2048 kB hugepages reported on node 1
00:32:30.028 [2024-07-11 23:45:50.901017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:30.285 [2024-07-11 23:45:50.986481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:30.285 Running I/O for 1 seconds...
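rpc_cmd in the trace above is the harness's thin wrapper around SPDK's JSON-RPC client, talking to the nvmf_tgt it just started via /var/tmp/spdk.sock. Outside the harness, the same five-step provisioning can be issued directly with scripts/rpc.py, using exactly the method names and arguments logged: create the TCP transport, back a namespace with a RAM bdev, and publish it behind a listener. A minimal sketch, run from an SPDK checkout (the -o and -u transport options are reproduced verbatim from the run above):

  RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'   # default RPC socket, as logged

  $RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options exactly as the harness passed them
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev with 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC endpoint is a pathname UNIX socket, so it stays reachable from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.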
00:32:31.656
00:32:31.656 Latency(us)
00:32:31.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:31.656 Verification LBA range: start 0x0 length 0x4000
00:32:31.656 Nvme1n1 : 1.01 13231.13 51.68 0.00 0.00 9633.12 1413.88 15728.64
00:32:31.656 ===================================================================================================================
00:32:31.656 Total : 13231.13 51.68 0.00 0.00 9633.12 1413.88 15728.64
00:32:31.656 23:45:52 -- host/bdevperf.sh@30 -- # bdevperfpid=387291
00:32:31.656 23:45:52 -- host/bdevperf.sh@32 -- # sleep 3
00:32:31.656 23:45:52 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:32:31.656 23:45:52 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:32:31.656 23:45:52 -- nvmf/common.sh@520 -- # config=()
00:32:31.656 23:45:52 -- nvmf/common.sh@520 -- # local subsystem config
00:32:31.656 23:45:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:32:31.656 23:45:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:32:31.656 {
00:32:31.656 "params": {
00:32:31.656 "name": "Nvme$subsystem",
00:32:31.656 "trtype": "$TEST_TRANSPORT",
00:32:31.656 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:31.656 "adrfam": "ipv4",
00:32:31.656 "trsvcid": "$NVMF_PORT",
00:32:31.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:31.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:31.656 "hdgst": ${hdgst:-false},
00:32:31.656 "ddgst": ${ddgst:-false}
00:32:31.656 },
00:32:31.656 "method": "bdev_nvme_attach_controller"
00:32:31.656 }
00:32:31.656 EOF
00:32:31.656 )")
00:32:31.656 23:45:52 -- nvmf/common.sh@542 -- # cat
00:32:31.656 23:45:52 -- nvmf/common.sh@544 -- # jq .
00:32:31.656 23:45:52 -- nvmf/common.sh@545 -- # IFS=,
00:32:31.656 23:45:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:32:31.656 "params": {
00:32:31.656 "name": "Nvme1",
00:32:31.656 "trtype": "tcp",
00:32:31.656 "traddr": "10.0.0.2",
00:32:31.656 "adrfam": "ipv4",
00:32:31.656 "trsvcid": "4420",
00:32:31.656 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:31.656 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:31.656 "hdgst": false,
00:32:31.656 "ddgst": false
00:32:31.656 },
00:32:31.656 "method": "bdev_nvme_attach_controller"
00:32:31.656 }'
00:32:31.656 [2024-07-11 23:45:52.473100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:31.656 [2024-07-11 23:45:52.473226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387291 ]
00:32:31.656 EAL: No free 2048 kB hugepages reported on node 1
00:32:31.656 [2024-07-11 23:45:52.545835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:31.913 [2024-07-11 23:45:52.629217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:31.913 Running I/O for 15 seconds...
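Both bdevperf invocations (the 1 s baseline whose results appear above, and the 15 s run just launched) take their bdev configuration as JSON over an inherited file descriptor: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza shown in the trace, and bash process substitution hands it to --json as /dev/fd/62 or /dev/fd/63, so no config file ever touches disk. A minimal sketch of the same invocation from an SPDK checkout, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is defined:

  # Feed the generated attach-controller config to bdevperf via process
  # substitution; bash exposes it as /dev/fd/NN, exactly as seen in the log.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 15 -f

Here -q is the queue depth (128), -o the I/O size in bytes (4096), -w the workload (verify reads back and checks what was written), and -t the runtime in seconds; the 15 s run additionally passes -f, which this test uses while it kills and restarts the target below. As a sanity check on the table above: 13231.13 IOPS x 4096 B is 51.68 MiB/s, matching the reported throughput, and by Little's law a depth of 128 at 13231 IOPS implies about 9.7 ms per I/O, consistent with the 9633 us average latency.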
00:32:35.199 23:45:55 -- host/bdevperf.sh@33 -- # kill -9 386996 00:32:35.199 23:45:55 -- host/bdevperf.sh@35 -- # sleep 3 00:32:35.199 [2024-07-11 23:45:55.436218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436627] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.199 [2024-07-11 23:45:55.436858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.199 [2024-07-11 23:45:55.436873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.436890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.436906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.436922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.436940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.436957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.436973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.436990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.200 [2024-07-11 23:45:55.437137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.200 [2024-07-11 23:45:55.437235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:35.200 [2024-07-11 23:45:55.437338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437686] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.200 [2024-07-11 23:45:55.437832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.200 [2024-07-11 23:45:55.437929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.437978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.437994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.200 [2024-07-11 23:45:55.438318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.200 [2024-07-11 23:45:55.438333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.438376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.438416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18944 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.438827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.438859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.438929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.438962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.438979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.438995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:35.201 [2024-07-11 23:45:55.439060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.439422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.439503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.439568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.439666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.201 [2024-07-11 23:45:55.439699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439735] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.201 [2024-07-11 23:45:55.439753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.201 [2024-07-11 23:45:55.439769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.439785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.439801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.439817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.439833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.439849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.439865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.439882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.439897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.439914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.439930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.439946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.439962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.439979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.439994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.440059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.440091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.440123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.440221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.440250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.202 [2024-07-11 23:45:55.440366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 
[2024-07-11 23:45:55.440410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.202 [2024-07-11 23:45:55.440607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b4d90 is same with the state(5) to be set 00:32:35.202 [2024-07-11 23:45:55.440642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:35.202 [2024-07-11 23:45:55.440655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:35.202 [2024-07-11 23:45:55.440667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:8 PRP1 0x0 PRP2 0x0 00:32:35.202 [2024-07-11 23:45:55.440681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.202 [2024-07-11 23:45:55.440750] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b4d90 was disconnected and freed. reset controller. 
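The wall of nvme_qpair messages above is the host-side drain that follows kill -9 of the target (pid 386996): with the listener gone, every command still outstanding in bdevperf's 128-deep queue is completed manually with ABORTED - SQ DELETION (status 00/08), one nvme_io_qpair_print_command/spdk_nvme_print_completion pair per command, before qpair 0x20b4d90 is disconnected and freed and a controller reset is scheduled. To tally that drain by opcode from a saved copy of this console log, something like the following works (build.log is a hypothetical path for the saved log):

  # Count aborted in-flight commands by opcode; the pattern matches the
  # "READ/WRITE sqid:1 cid:NN" lines printed in the dump above.
  grep -o '\(READ\|WRITE\) sqid:1 cid:[0-9]*' build.log | awk '{print $1}' | sort | uniq -c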
00:32:35.202 [2024-07-11 23:45:55.443471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.202 [2024-07-11 23:45:55.443553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.202 [2024-07-11 23:45:55.444310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.444574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.444602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.202 [2024-07-11 23:45:55.444619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.202 [2024-07-11 23:45:55.444787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.202 [2024-07-11 23:45:55.444919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.202 [2024-07-11 23:45:55.444942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.202 [2024-07-11 23:45:55.444961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.202 [2024-07-11 23:45:55.447471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.202 [2024-07-11 23:45:55.456374] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.202 [2024-07-11 23:45:55.456855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.457210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.457237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.202 [2024-07-11 23:45:55.457253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.202 [2024-07-11 23:45:55.457369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.202 [2024-07-11 23:45:55.457525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.202 [2024-07-11 23:45:55.457548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.202 [2024-07-11 23:45:55.457564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.202 [2024-07-11 23:45:55.459839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
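Note: errno 111 is ECONNREFUSED on Linux, so every posix_sock_create failure above means the TCP connection attempt to 10.0.0.2:4420 (4420 being the conventional NVMe/TCP port) was actively refused -- nothing was accepting connections on the target at that moment, the expected state while the subsystem is held down. A self-contained sketch that reproduces the same report; the loopback address stands in for the log's target, and any port with no listener behaves the same:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* With no listener on the port, connect() fails immediately. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

On Linux this prints "connect() failed, errno = 111 (Connection refused)".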
00:32:35.202 [2024-07-11 23:45:55.468899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.202 [2024-07-11 23:45:55.469367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.469684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.469735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.202 [2024-07-11 23:45:55.469752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.202 [2024-07-11 23:45:55.469917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.202 [2024-07-11 23:45:55.470085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.202 [2024-07-11 23:45:55.470109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.202 [2024-07-11 23:45:55.470124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.202 [2024-07-11 23:45:55.472341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.202 [2024-07-11 23:45:55.481353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.202 [2024-07-11 23:45:55.481811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.482104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.202 [2024-07-11 23:45:55.482164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.202 [2024-07-11 23:45:55.482182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.482352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.482525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.482548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.482564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.484785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.203 [2024-07-11 23:45:55.494009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.494474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.494834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.494883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.494901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.495101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.495280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.495304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.495319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.497707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.203 [2024-07-11 23:45:55.506600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.507105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.507569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.507613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.507633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.507768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.507956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.507980] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.507995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.510368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.203 [2024-07-11 23:45:55.519180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.519583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.519858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.519907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.519925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.520090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.520325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.520350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.520366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.522524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.203 [2024-07-11 23:45:55.531561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.532087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.532516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.532559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.532579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.532732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.532920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.532943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.532959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.535137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.203 [2024-07-11 23:45:55.544135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.544565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.544870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.544919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.544943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.545109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.545288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.545313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.545329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.547611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.203 [2024-07-11 23:45:55.556866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.557267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.557454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.557482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.557499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.557646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.557778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.557802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.557817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.560200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.203 [2024-07-11 23:45:55.569330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.569869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.570163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.570204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.570224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.570378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.570548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.570572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.203 [2024-07-11 23:45:55.570587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.203 [2024-07-11 23:45:55.572852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.203 [2024-07-11 23:45:55.581978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.203 [2024-07-11 23:45:55.582400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.582703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.203 [2024-07-11 23:45:55.582752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.203 [2024-07-11 23:45:55.582770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.203 [2024-07-11 23:45:55.582906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.203 [2024-07-11 23:45:55.583092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.203 [2024-07-11 23:45:55.583116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.583131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.585755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
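Note: the recurring "(9): Bad file descriptor" when flushing tqpair 0x2095860 is errno 9 (EBADF): by the time the flush runs, the failed connection's descriptor has already been closed, so the operation targets an fd that is no longer valid. A minimal sketch of the same errno; the pipe merely stands in for the qpair's socket and says nothing about how nvme_tcp manages its descriptors:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return 1;
    close(fds[1]);                 /* descriptor torn down first...    */
    if (write(fds[1], "x", 1) < 0) /* ...then "flushed" through anyway */
        printf("(%d): %s\n", errno, strerror(errno));
    close(fds[0]);
    return 0;
}

This prints "(9): Bad file descriptor".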
00:32:35.204 [2024-07-11 23:45:55.594609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.595076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.595498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.595542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.595562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.595769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.595938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.595962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.595977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.598456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.204 [2024-07-11 23:45:55.607235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.607668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.608010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.608060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.608078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.608237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.608388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.608411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.608426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.611089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.204 [2024-07-11 23:45:55.619860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.620260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.620521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.620549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.620567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.620756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.620943] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.620967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.620982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.623343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.204 [2024-07-11 23:45:55.632588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.632997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.633257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.633286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.633303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.633468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.633654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.633677] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.633693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.636165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.204 [2024-07-11 23:45:55.645222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.645625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.645933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.645983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.646000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.646197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.646384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.646408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.646423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.648993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.204 [2024-07-11 23:45:55.657769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.658328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.658664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.658715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.658733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.658923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.659136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.659175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.659191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.661566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.204 [2024-07-11 23:45:55.670379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.670983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.671351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.671383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.671401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.671590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.671778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.671802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.671818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.674087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.204 [2024-07-11 23:45:55.682986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.683553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.683907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.683958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.683977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.684199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.684352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.684376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.684392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.686665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.204 [2024-07-11 23:45:55.695564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.695928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.696148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.696178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.696195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.696306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.696492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.696522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.696538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.204 [2024-07-11 23:45:55.698735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.204 [2024-07-11 23:45:55.708210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.204 [2024-07-11 23:45:55.708678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.708977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.204 [2024-07-11 23:45:55.709026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.204 [2024-07-11 23:45:55.709043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.204 [2024-07-11 23:45:55.709223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.204 [2024-07-11 23:45:55.709428] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.204 [2024-07-11 23:45:55.709452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.204 [2024-07-11 23:45:55.709467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.711768] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.205 [2024-07-11 23:45:55.720855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.721306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.721504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.721533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.721550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.721661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.721829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.721852] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.721867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.724176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.205 [2024-07-11 23:45:55.733591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.734038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.734266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.734295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.734313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.734460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.734610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.734633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.734655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.736905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.205 [2024-07-11 23:45:55.746211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.746729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.747069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.747119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.747137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.747296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.747446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.747470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.747485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.749781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.205 [2024-07-11 23:45:55.758852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.759341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.759604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.759655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.759672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.759873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.760079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.760102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.760117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.762495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.205 [2024-07-11 23:45:55.771412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.771968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.772302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.772334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.772352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.772488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.772658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.772682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.772697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.775092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.205 [2024-07-11 23:45:55.784207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.784794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.785128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.785195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.785214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.785403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.785593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.785616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.785631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.788085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.205 [2024-07-11 23:45:55.796644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.797071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.797457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.797502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.797522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.797693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.797899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.797923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.797938] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.800270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.205 [2024-07-11 23:45:55.809300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.809778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.810107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.810170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.810189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.810318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.810523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.810547] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.810562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.812916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
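Note: the cycle above repeats on a roughly 12-13 ms cadence: disconnect, one reconnect attempt, ECONNREFUSED, the controller is marked failed, "Resetting controller failed", then the next reset begins. In shape (though not in implementation detail) this is a plain retry loop; a standalone sketch under that reading, where the attempt limit, delay, and loopback stand-in address are assumptions rather than values taken from bdev_nvme:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* One reconnect attempt: true if a TCP connection could be established. */
static bool try_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &a.sin_addr);
    bool ok = connect(fd, (struct sockaddr *)&a, sizeof(a)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    /* 127.0.0.1 stands in for the log's 10.0.0.2; a refused connect
     * returns immediately, so the loop paces itself with the delay. */
    for (int attempt = 1; attempt <= 5; attempt++) {
        if (try_connect("127.0.0.1", 4420)) {
            printf("attempt %d: reconnected\n", attempt);
            return 0;
        }
        fprintf(stderr, "attempt %d: resetting controller failed\n", attempt);
        usleep(12 * 1000);  /* ~12 ms between resets, as seen in the log */
    }
    fprintf(stderr, "giving up\n");
    return 1;
}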
00:32:35.205 [2024-07-11 23:45:55.822002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.822405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.822664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.822712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.822730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.822912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.823082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.823105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.823120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.825371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.205 [2024-07-11 23:45:55.834486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.205 [2024-07-11 23:45:55.834907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.835228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.205 [2024-07-11 23:45:55.835257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.205 [2024-07-11 23:45:55.835274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.205 [2024-07-11 23:45:55.835457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.205 [2024-07-11 23:45:55.835608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.205 [2024-07-11 23:45:55.835631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.205 [2024-07-11 23:45:55.835646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.205 [2024-07-11 23:45:55.838092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.206 [2024-07-11 23:45:55.847134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.847710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.847997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.848049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.848068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.848239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.848428] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.848452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.848467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.850840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.206 [2024-07-11 23:45:55.859415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.859927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.860254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.860286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.860305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.860494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.860682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.860706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.860721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.863061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.206 [2024-07-11 23:45:55.871926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.872337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.872714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.872764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.872782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.873001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.873220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.873245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.873260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.875699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.206 [2024-07-11 23:45:55.884626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.885190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.885597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.885641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.885661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.885831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.886002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.886025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.886041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.888137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.206 [2024-07-11 23:45:55.897289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.897729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.898094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.898162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.898182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.898366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.898535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.898559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.898574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.900945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.206 [2024-07-11 23:45:55.909750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.910381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.910777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.910826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.910845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.910998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.911131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.911169] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.911186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.913416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.206 [2024-07-11 23:45:55.922293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.922864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.923193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.923224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.923243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.923432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.923602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.923625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.923640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.926009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.206 [2024-07-11 23:45:55.934884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.935287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.935573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.935604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.935628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.935813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.935946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.935969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.935984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.938170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.206 [2024-07-11 23:45:55.947424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.947832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.948085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.948134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.948161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.948327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.948478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.948501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.948516] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.950814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.206 [2024-07-11 23:45:55.959991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.206 [2024-07-11 23:45:55.960572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.960890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.206 [2024-07-11 23:45:55.960942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.206 [2024-07-11 23:45:55.960961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.206 [2024-07-11 23:45:55.961132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.206 [2024-07-11 23:45:55.961316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.206 [2024-07-11 23:45:55.961340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.206 [2024-07-11 23:45:55.961356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.206 [2024-07-11 23:45:55.963839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.206 [2024-07-11 23:45:55.972467] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:55.972921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:55.973167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:55.973196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:55.973214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:55.973423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:55.973592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:55.973615] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:55.973631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:55.975858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.207 [2024-07-11 23:45:55.984942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:55.985399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:55.985819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:55.985863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:55.985883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:55.986054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:55.986263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:55.986299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:55.986316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:55.988549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.207 [2024-07-11 23:45:55.997495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:55.998157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:55.998583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:55.998627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:55.998647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:55.998855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:55.999007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:55.999030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:55.999046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:56.001307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.207 [2024-07-11 23:45:56.010127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:56.010610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.010977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.011024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:56.011042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:56.011184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:56.011360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:56.011384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:56.011399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:56.013606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.207 [2024-07-11 23:45:56.022771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:56.023357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.023778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.023823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:56.023842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:56.024032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:56.024218] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:56.024243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:56.024259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:56.026607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.207 [2024-07-11 23:45:56.035322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:56.035753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.036198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.036229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:56.036247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:56.036437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:56.036607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:56.036630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:56.036646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:56.039091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.207 [2024-07-11 23:45:56.048125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:56.048640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.048962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.049015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:56.049033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:56.049220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:56.049337] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:56.049367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:56.049383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:56.051720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.207 [2024-07-11 23:45:56.060531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:56.061020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.061353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.061398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:56.061418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:56.061625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:56.061777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:56.061801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:56.061816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:56.064121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.207 [2024-07-11 23:45:56.073039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:56.073594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.074009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.074064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:56.074082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.207 [2024-07-11 23:45:56.074286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.207 [2024-07-11 23:45:56.074457] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.207 [2024-07-11 23:45:56.074482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.207 [2024-07-11 23:45:56.074497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.207 [2024-07-11 23:45:56.076725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.207 [2024-07-11 23:45:56.085657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.207 [2024-07-11 23:45:56.086113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.086385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.207 [2024-07-11 23:45:56.086414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.207 [2024-07-11 23:45:56.086432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.208 [2024-07-11 23:45:56.086615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.208 [2024-07-11 23:45:56.086749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.208 [2024-07-11 23:45:56.086772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.208 [2024-07-11 23:45:56.086794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.208 [2024-07-11 23:45:56.089009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.208 [2024-07-11 23:45:56.098242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.208 [2024-07-11 23:45:56.098692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.098996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.099025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.208 [2024-07-11 23:45:56.099042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.208 [2024-07-11 23:45:56.099166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.208 [2024-07-11 23:45:56.099317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.208 [2024-07-11 23:45:56.099340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.208 [2024-07-11 23:45:56.099355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.208 [2024-07-11 23:45:56.101582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.208 [2024-07-11 23:45:56.110832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.208 [2024-07-11 23:45:56.111398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.111767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.111818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.208 [2024-07-11 23:45:56.111836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.208 [2024-07-11 23:45:56.112026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.208 [2024-07-11 23:45:56.112211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.208 [2024-07-11 23:45:56.112236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.208 [2024-07-11 23:45:56.112251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.208 [2024-07-11 23:45:56.114553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.208 [2024-07-11 23:45:56.123275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.208 [2024-07-11 23:45:56.123854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.124194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.124226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.208 [2024-07-11 23:45:56.124244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.208 [2024-07-11 23:45:56.124434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.208 [2024-07-11 23:45:56.124586] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.208 [2024-07-11 23:45:56.124610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.208 [2024-07-11 23:45:56.124625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.208 [2024-07-11 23:45:56.126780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.208 [2024-07-11 23:45:56.136096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.208 [2024-07-11 23:45:56.136688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.137085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.208 [2024-07-11 23:45:56.137150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.208 [2024-07-11 23:45:56.137172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.208 [2024-07-11 23:45:56.137344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.208 [2024-07-11 23:45:56.137478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.208 [2024-07-11 23:45:56.137502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.208 [2024-07-11 23:45:56.137517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.208 [2024-07-11 23:45:56.139843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.468 [2024-07-11 23:45:56.148790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.149212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.149430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.149458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.468 [2024-07-11 23:45:56.149477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.468 [2024-07-11 23:45:56.149643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.468 [2024-07-11 23:45:56.149829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.468 [2024-07-11 23:45:56.149853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.468 [2024-07-11 23:45:56.149868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.468 [2024-07-11 23:45:56.152181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.468 [2024-07-11 23:45:56.161437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.161849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.162098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.162161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.468 [2024-07-11 23:45:56.162181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.468 [2024-07-11 23:45:56.162400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.468 [2024-07-11 23:45:56.162569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.468 [2024-07-11 23:45:56.162593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.468 [2024-07-11 23:45:56.162608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.468 [2024-07-11 23:45:56.164927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.468 [2024-07-11 23:45:56.174045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.174511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.174767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.174814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.468 [2024-07-11 23:45:56.174831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.468 [2024-07-11 23:45:56.174996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.468 [2024-07-11 23:45:56.175175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.468 [2024-07-11 23:45:56.175199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.468 [2024-07-11 23:45:56.175215] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.468 [2024-07-11 23:45:56.177443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.468 [2024-07-11 23:45:56.186570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.187024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.187470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.187513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.468 [2024-07-11 23:45:56.187534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.468 [2024-07-11 23:45:56.187687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.468 [2024-07-11 23:45:56.187856] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.468 [2024-07-11 23:45:56.187880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.468 [2024-07-11 23:45:56.187895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.468 [2024-07-11 23:45:56.190390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.468 [2024-07-11 23:45:56.199241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.199713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.199937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.199990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.468 [2024-07-11 23:45:56.200008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.468 [2024-07-11 23:45:56.200183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.468 [2024-07-11 23:45:56.200335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.468 [2024-07-11 23:45:56.200359] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.468 [2024-07-11 23:45:56.200374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.468 [2024-07-11 23:45:56.202786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.468 [2024-07-11 23:45:56.211989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.212553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.212883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.212935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.468 [2024-07-11 23:45:56.212953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.468 [2024-07-11 23:45:56.213107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.468 [2024-07-11 23:45:56.213309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.468 [2024-07-11 23:45:56.213334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.468 [2024-07-11 23:45:56.213349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.468 [2024-07-11 23:45:56.215599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.468 [2024-07-11 23:45:56.224588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.225079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.225498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.225542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.468 [2024-07-11 23:45:56.225562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.468 [2024-07-11 23:45:56.225715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.468 [2024-07-11 23:45:56.225850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.468 [2024-07-11 23:45:56.225873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.468 [2024-07-11 23:45:56.225888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.468 [2024-07-11 23:45:56.228287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.468 [2024-07-11 23:45:56.237358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.468 [2024-07-11 23:45:56.237920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.238269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.468 [2024-07-11 23:45:56.238301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.238319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.238526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.238733] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.238757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.238772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.241060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.469 [2024-07-11 23:45:56.249991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.250612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.250926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.250989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.251008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.251191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.251380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.251404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.251420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.253736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.469 [2024-07-11 23:45:56.262574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.263187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.263418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.263446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.263464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.263629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.263798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.263821] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.263837] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.265959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.469 [2024-07-11 23:45:56.275032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.275385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.275639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.275668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.275685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.275836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.276040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.276064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.276080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.278623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.469 [2024-07-11 23:45:56.287402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.287838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.288131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.288170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.288196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.288380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.288549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.288572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.288587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.290849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.469 [2024-07-11 23:45:56.299842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.300211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.300581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.300625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.300646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.300834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.301005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.301029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.301044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.303428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.469 [2024-07-11 23:45:56.312616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.313032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.313255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.313285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.313303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.313450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.313638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.313662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.313677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.315921] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.469 [2024-07-11 23:45:56.325413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.325783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.325960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.325988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.326006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.469 [2024-07-11 23:45:56.326123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.469 [2024-07-11 23:45:56.326286] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.469 [2024-07-11 23:45:56.326310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.469 [2024-07-11 23:45:56.326325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.469 [2024-07-11 23:45:56.328713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.469 [2024-07-11 23:45:56.337822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.469 [2024-07-11 23:45:56.338216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.338407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 23:45:56.338435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.469 [2024-07-11 23:45:56.338453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.470 [2024-07-11 23:45:56.338581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.470 [2024-07-11 23:45:56.338768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.470 [2024-07-11 23:45:56.338792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.470 [2024-07-11 23:45:56.338807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.470 [2024-07-11 23:45:56.341072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.470 [2024-07-11 23:45:56.350468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.470 [2024-07-11 23:45:56.350872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.351165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.351194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.470 [2024-07-11 23:45:56.351212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.470 [2024-07-11 23:45:56.351340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.470 [2024-07-11 23:45:56.351491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.470 [2024-07-11 23:45:56.351514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.470 [2024-07-11 23:45:56.351529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.470 [2024-07-11 23:45:56.353802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.470 [2024-07-11 23:45:56.363100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.470 [2024-07-11 23:45:56.363542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.363833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.363861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.470 [2024-07-11 23:45:56.363879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.470 [2024-07-11 23:45:56.364025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.470 [2024-07-11 23:45:56.364211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.470 [2024-07-11 23:45:56.364235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.470 [2024-07-11 23:45:56.364250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.470 [2024-07-11 23:45:56.366640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.470 [2024-07-11 23:45:56.375741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.470 [2024-07-11 23:45:56.376193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.376419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.376448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.470 [2024-07-11 23:45:56.376465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.470 [2024-07-11 23:45:56.376629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.470 [2024-07-11 23:45:56.376815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.470 [2024-07-11 23:45:56.376838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.470 [2024-07-11 23:45:56.376853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.470 [2024-07-11 23:45:56.379220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.470 [2024-07-11 23:45:56.388290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.470 [2024-07-11 23:45:56.388661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.388895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.388923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.470 [2024-07-11 23:45:56.388941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.470 [2024-07-11 23:45:56.389124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.470 [2024-07-11 23:45:56.389340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.470 [2024-07-11 23:45:56.389364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.470 [2024-07-11 23:45:56.389379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.470 [2024-07-11 23:45:56.391677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.470 [2024-07-11 23:45:56.400741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.470 [2024-07-11 23:45:56.401155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.401341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.401369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.470 [2024-07-11 23:45:56.401387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.470 [2024-07-11 23:45:56.401534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.470 [2024-07-11 23:45:56.401684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.470 [2024-07-11 23:45:56.401714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.470 [2024-07-11 23:45:56.401730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.470 [2024-07-11 23:45:56.404192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.470 [2024-07-11 23:45:56.413079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.470 [2024-07-11 23:45:56.413461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.413649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 23:45:56.413678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.470 [2024-07-11 23:45:56.413695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.470 [2024-07-11 23:45:56.413879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.470 [2024-07-11 23:45:56.414083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.470 [2024-07-11 23:45:56.414106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.470 [2024-07-11 23:45:56.414122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.470 [2024-07-11 23:45:56.416339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.732 [2024-07-11 23:45:56.425779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.732 [2024-07-11 23:45:56.426155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.732 [2024-07-11 23:45:56.426482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.732 [2024-07-11 23:45:56.426526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.732 [2024-07-11 23:45:56.426546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.732 [2024-07-11 23:45:56.426717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.732 [2024-07-11 23:45:56.426888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.732 [2024-07-11 23:45:56.426911] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.732 [2024-07-11 23:45:56.426927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.732 [2024-07-11 23:45:56.429245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.732 [2024-07-11 23:45:56.438551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:35.732 [2024-07-11 23:45:56.439005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.732 [2024-07-11 23:45:56.439248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.733 [2024-07-11 23:45:56.439279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:35.733 [2024-07-11 23:45:56.439297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:35.733 [2024-07-11 23:45:56.439498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:35.733 [2024-07-11 23:45:56.439703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:35.733 [2024-07-11 23:45:56.439726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:35.733 [2024-07-11 23:45:56.439748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:35.733 [2024-07-11 23:45:56.442165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.733 [2024-07-11 23:45:56.451131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.451576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.451833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.451861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.451878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.452026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.452190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.452215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.452230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.454692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.463789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.464205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.464413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.464441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.464458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.464605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.464774] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.464798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.464813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.467098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.476397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.476829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.477201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.477230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.477247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.477395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.477549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.477572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.477587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.479998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.488949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.489378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.489619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.489647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.489664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.489792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.489941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.489964] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.489979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.492256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.501661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.502098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.502288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.502318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.502336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.502483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.502597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.502620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.502635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.504901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.514172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.514541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.514714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.514742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.514759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.514924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.515110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.515133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.515159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.517421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.526836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.527241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.527422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.527451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.527468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.527633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.527837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.527861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.527876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.530296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.539669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.540126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.540310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.540338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.540355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.540483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.540616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.540639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.540654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.543123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.552244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.552703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.552918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.552946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.733 [2024-07-11 23:45:56.552963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.733 [2024-07-11 23:45:56.553110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.733 [2024-07-11 23:45:56.553350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.733 [2024-07-11 23:45:56.553374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.733 [2024-07-11 23:45:56.553389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.733 [2024-07-11 23:45:56.555748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.733 [2024-07-11 23:45:56.564804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.733 [2024-07-11 23:45:56.565162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.733 [2024-07-11 23:45:56.565344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.565372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.565390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.565626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.565813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.565837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.565853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.568200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.577357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.577727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.577914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.577942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.577959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.578106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.578277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.578302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.578317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.580598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.589785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.590147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.590331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.590356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.590372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.590497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.590679] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.590699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.590711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.592663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.602002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.602362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.602546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.602574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.602588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.602723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.602817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.602835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.602847] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.604811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.614241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.614580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.614781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.614803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.614818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.614952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.615134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.615164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.615178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.617191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.626610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.627033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.627243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.627268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.627283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.627422] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.627605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.627624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.627637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.629553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.638635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.639035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.639411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.639438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.639473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.639625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.639777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.639796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.639808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.641737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.650696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.651201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.651524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.651548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.651562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.651712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.651865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.651884] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.651896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.653788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.662812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.663241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.663581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.663606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.663620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.663755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.663908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.663927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.663939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.665939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.734 [2024-07-11 23:45:56.674943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.734 [2024-07-11 23:45:56.675489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.675750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.734 [2024-07-11 23:45:56.675773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.734 [2024-07-11 23:45:56.675787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.734 [2024-07-11 23:45:56.675897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.734 [2024-07-11 23:45:56.676037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.734 [2024-07-11 23:45:56.676056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.734 [2024-07-11 23:45:56.676069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.734 [2024-07-11 23:45:56.678018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.687060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.687664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.687925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.687951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.687966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.688091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.688293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.688315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.688329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.690396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.699364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.699826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.700049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.700072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.700086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.700261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.700362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.700382] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.700395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.702492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.711707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.712058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.712344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.712370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.712385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.712524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.712654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.712673] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.712686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.714704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.723863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.724336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.724702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.724725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.724739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.724889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.725012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.725031] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.725044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.726952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.736093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.736595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.736839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.736862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.736877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.737041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.737254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.737276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.737289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.739230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.748496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.748941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.749274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.749299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.749314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.749487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.749656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.749681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.749694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.751684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.760780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.761247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.761647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.761696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.761714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.761809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.761962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.761982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.761994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.763949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.772918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.773373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.773677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.773701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.773715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.773835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.773988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.774007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.774019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.775916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.785222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.785778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.786074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.786099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.786114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.786329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.997 [2024-07-11 23:45:56.786499] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.997 [2024-07-11 23:45:56.786520] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.997 [2024-07-11 23:45:56.786539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.997 [2024-07-11 23:45:56.788510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.997 [2024-07-11 23:45:56.797268] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.997 [2024-07-11 23:45:56.797654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.797861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.997 [2024-07-11 23:45:56.797884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.997 [2024-07-11 23:45:56.797899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.997 [2024-07-11 23:45:56.798019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.798168] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.798189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.798202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.800017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.809547] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.809998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.810316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.810340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.810355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.810477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.810615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.810635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.810647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.812627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.821644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.822158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.822411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.822435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.822449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.822601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.822724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.822744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.822756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.824700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.833816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.834250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.834654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.834703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.834720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.834845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.834999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.835019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.835032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.837059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.845867] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.846309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.846693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.846730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.846745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.846923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.847061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.847081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.847093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.849101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.858183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.858613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.858865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.858888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.858902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.859008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.859156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.859177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.859205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.861200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.870370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.870850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.871153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.871177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.871207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.871351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.871524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.871544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.871556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.873554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.882616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.883098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.883395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.883420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.883435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.883620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.883713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.883732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.883745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.885700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.894613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.895192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.895488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.895513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.895528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.895653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.895807] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.895826] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.895839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.897757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.906831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.907277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.907573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.907596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.907611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.907731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.998 [2024-07-11 23:45:56.907839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.998 [2024-07-11 23:45:56.907858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.998 [2024-07-11 23:45:56.907870] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.998 [2024-07-11 23:45:56.909914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.998 [2024-07-11 23:45:56.919059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.998 [2024-07-11 23:45:56.919539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.919812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.998 [2024-07-11 23:45:56.919835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.998 [2024-07-11 23:45:56.919849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.998 [2024-07-11 23:45:56.919998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.999 [2024-07-11 23:45:56.920175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.999 [2024-07-11 23:45:56.920195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.999 [2024-07-11 23:45:56.920209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.999 [2024-07-11 23:45:56.922239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.999 [2024-07-11 23:45:56.931270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.999 [2024-07-11 23:45:56.931688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.999 [2024-07-11 23:45:56.931953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.999 [2024-07-11 23:45:56.931976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.999 [2024-07-11 23:45:56.931990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.999 [2024-07-11 23:45:56.932196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.999 [2024-07-11 23:45:56.932343] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.999 [2024-07-11 23:45:56.932364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.999 [2024-07-11 23:45:56.932377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:35.999 [2024-07-11 23:45:56.934095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:35.999 [2024-07-11 23:45:56.943416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:35.999 [2024-07-11 23:45:56.943808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.999 [2024-07-11 23:45:56.944053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.999 [2024-07-11 23:45:56.944081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:35.999 [2024-07-11 23:45:56.944095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:35.999 [2024-07-11 23:45:56.944308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:35.999 [2024-07-11 23:45:56.944490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:35.999 [2024-07-11 23:45:56.944511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:35.999 [2024-07-11 23:45:56.944524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.261 [2024-07-11 23:45:56.946519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.261 [2024-07-11 23:45:56.955507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.261 [2024-07-11 23:45:56.955939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.261 [2024-07-11 23:45:56.956291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.261 [2024-07-11 23:45:56.956329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.261 [2024-07-11 23:45:56.956344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.261 [2024-07-11 23:45:56.956527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.261 [2024-07-11 23:45:56.956665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.261 [2024-07-11 23:45:56.956684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.261 [2024-07-11 23:45:56.956697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.261 [2024-07-11 23:45:56.958621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.261 [2024-07-11 23:45:56.967710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.261 [2024-07-11 23:45:56.968134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.261 [2024-07-11 23:45:56.968392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.261 [2024-07-11 23:45:56.968416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.261 [2024-07-11 23:45:56.968430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.261 [2024-07-11 23:45:56.968595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.261 [2024-07-11 23:45:56.968748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.261 [2024-07-11 23:45:56.968767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.261 [2024-07-11 23:45:56.968779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.261 [2024-07-11 23:45:56.970758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.261 [2024-07-11 23:45:56.979882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.261 [2024-07-11 23:45:56.980345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.261 [2024-07-11 23:45:56.980632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:56.980655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:56.980674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:56.980809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:56.980932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:56.980951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:56.980964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:56.982863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:56.992220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:56.992692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:56.993005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:56.993030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:56.993045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:56.993215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:56.993348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:56.993369] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:56.993383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:56.995269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.004443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.004895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.005160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.005199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.005215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.005390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.005563] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.005582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.005595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.007554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.016602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.017164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.017615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.017667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.017684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.017860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.017970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.017989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.018002] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.020002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.028692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.029188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.029462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.029487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.029502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.029598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.029730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.029749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.029762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.031715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.040929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.041427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.041713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.041736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.041750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.041885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.042038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.042057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.042070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.044069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.053056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.053490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.053748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.053771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.053785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.053934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.054048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.054068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.054080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.056017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.065406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.065807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.066042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.066064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.066078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.066257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.066450] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.066470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.066483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.068371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.077390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.077809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.078155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.078194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.078210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.078353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.078557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.078577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.078590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.080506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.089440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.089991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.090393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.090419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.262 [2024-07-11 23:45:57.090435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.262 [2024-07-11 23:45:57.090608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.262 [2024-07-11 23:45:57.090718] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.262 [2024-07-11 23:45:57.090742] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.262 [2024-07-11 23:45:57.090756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.262 [2024-07-11 23:45:57.092845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.262 [2024-07-11 23:45:57.101607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.262 [2024-07-11 23:45:57.102086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.262 [2024-07-11 23:45:57.102414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.102441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.102456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.102594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.102746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.102766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.102778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.104649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.113718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.114096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.114379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.114404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.114419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.114605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.114757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.114777] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.114789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.116694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.125878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.126287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.126720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.126769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.126786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.126939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.127064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.127084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.127107] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.129155] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.138183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.138546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.138802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.138825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.138839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.138950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.139074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.139093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.139105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.141040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.150405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.150858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.151134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.151167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.151196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.151372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.151531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.151550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.151563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.153608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.162645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.163076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.163411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.163436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.163452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.163574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.163711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.163731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.163743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.165706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.174803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.175277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.175536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.175559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.175573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.175708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.175831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.175850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.175862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.177819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.186863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.187318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.187580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.187603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.187617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.187781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.187889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.187908] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.187920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.189955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.263 [2024-07-11 23:45:57.199042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.263 [2024-07-11 23:45:57.199499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.199759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.263 [2024-07-11 23:45:57.199782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.263 [2024-07-11 23:45:57.199796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.263 [2024-07-11 23:45:57.199945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.263 [2024-07-11 23:45:57.200054] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.263 [2024-07-11 23:45:57.200073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.263 [2024-07-11 23:45:57.200085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.263 [2024-07-11 23:45:57.202401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.523 [2024-07-11 23:45:57.211618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.523 [2024-07-11 23:45:57.212056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.523 [2024-07-11 23:45:57.212300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.523 [2024-07-11 23:45:57.212330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.523 [2024-07-11 23:45:57.212347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.523 [2024-07-11 23:45:57.212494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.523 [2024-07-11 23:45:57.212662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.523 [2024-07-11 23:45:57.212685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.523 [2024-07-11 23:45:57.212701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.523 [2024-07-11 23:45:57.214854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.523 [2024-07-11 23:45:57.224222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.523 [2024-07-11 23:45:57.224696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.523 [2024-07-11 23:45:57.224993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.523 [2024-07-11 23:45:57.225041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.523 [2024-07-11 23:45:57.225059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.523 [2024-07-11 23:45:57.225198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.523 [2024-07-11 23:45:57.225349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.523 [2024-07-11 23:45:57.225372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.523 [2024-07-11 23:45:57.225387] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.523 [2024-07-11 23:45:57.227658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.523 [2024-07-11 23:45:57.236867] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.523 [2024-07-11 23:45:57.237350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.237660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.237710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.237728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.237928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.238097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.238120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.238135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.240516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.249379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.250025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.250328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.250361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.250380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.250514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.250685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.250708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.250724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.253065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.261831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.262243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.262576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.262626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.262643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.262790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.262942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.262965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.262980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.265491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.274504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.275046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.275339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.275371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.275390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.275578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.275786] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.275809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.275825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.278173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.286973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.287570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.287916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.287975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.287994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.288163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.288351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.288375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.288391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.290728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.299485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.299890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.300116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.300155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.300176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.300378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.300511] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.300534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.300549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.302955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.312074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.312635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.312948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.312997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.313016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.313163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.313335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.313359] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.313374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.315709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.324542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.325037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.325331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.325360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.325384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.325568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.325719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.325743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.325758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.328123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.524 [2024-07-11 23:45:57.337180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.524 [2024-07-11 23:45:57.337591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.337857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.524 [2024-07-11 23:45:57.337908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.524 [2024-07-11 23:45:57.337926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.524 [2024-07-11 23:45:57.338108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.524 [2024-07-11 23:45:57.338289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.524 [2024-07-11 23:45:57.338325] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.524 [2024-07-11 23:45:57.338341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.524 [2024-07-11 23:45:57.340589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.349748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.525 [2024-07-11 23:45:57.350247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.350613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.350664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.525 [2024-07-11 23:45:57.350684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.525 [2024-07-11 23:45:57.350837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.525 [2024-07-11 23:45:57.350989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.525 [2024-07-11 23:45:57.351012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.525 [2024-07-11 23:45:57.351027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.525 [2024-07-11 23:45:57.353288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.362445] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.525 [2024-07-11 23:45:57.363058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.363352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.363385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.525 [2024-07-11 23:45:57.363403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.525 [2024-07-11 23:45:57.363562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.525 [2024-07-11 23:45:57.363697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.525 [2024-07-11 23:45:57.363721] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.525 [2024-07-11 23:45:57.363736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.525 [2024-07-11 23:45:57.365966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.375101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.525 [2024-07-11 23:45:57.375625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.376010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.376062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.525 [2024-07-11 23:45:57.376080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.525 [2024-07-11 23:45:57.376266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.525 [2024-07-11 23:45:57.376401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.525 [2024-07-11 23:45:57.376425] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.525 [2024-07-11 23:45:57.376440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.525 [2024-07-11 23:45:57.378725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.387608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.525 [2024-07-11 23:45:57.388062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.388345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.388375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.525 [2024-07-11 23:45:57.388392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.525 [2024-07-11 23:45:57.388523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.525 [2024-07-11 23:45:57.388746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.525 [2024-07-11 23:45:57.388769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.525 [2024-07-11 23:45:57.388784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.525 [2024-07-11 23:45:57.390997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.400277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.525 [2024-07-11 23:45:57.400691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.401004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.401053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.525 [2024-07-11 23:45:57.401070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.525 [2024-07-11 23:45:57.401230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.525 [2024-07-11 23:45:57.401424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.525 [2024-07-11 23:45:57.401447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.525 [2024-07-11 23:45:57.401462] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.525 [2024-07-11 23:45:57.403689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.412709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.525 [2024-07-11 23:45:57.413326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.413634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.413682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.525 [2024-07-11 23:45:57.413700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.525 [2024-07-11 23:45:57.413835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.525 [2024-07-11 23:45:57.414006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.525 [2024-07-11 23:45:57.414029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.525 [2024-07-11 23:45:57.414044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.525 [2024-07-11 23:45:57.416213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.425200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.525 [2024-07-11 23:45:57.425731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.426129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.525 [2024-07-11 23:45:57.426230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:36.525 [2024-07-11 23:45:57.426250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:36.525 [2024-07-11 23:45:57.426404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:36.525 [2024-07-11 23:45:57.426591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:36.525 [2024-07-11 23:45:57.426614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:36.525 [2024-07-11 23:45:57.426630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.525 [2024-07-11 23:45:57.429060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:36.525 [2024-07-11 23:45:57.437792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.525 [2024-07-11 23:45:57.438358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.525 [2024-07-11 23:45:57.438615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.525 [2024-07-11 23:45:57.438667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.525 [2024-07-11 23:45:57.438684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.525 [2024-07-11 23:45:57.438868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.525 [2024-07-11 23:45:57.439035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.525 [2024-07-11 23:45:57.439063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.525 [2024-07-11 23:45:57.439078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.525 [2024-07-11 23:45:57.441519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.525 [2024-07-11 23:45:57.450351] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.525 [2024-07-11 23:45:57.450981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.525 [2024-07-11 23:45:57.451294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.525 [2024-07-11 23:45:57.451326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.525 [2024-07-11 23:45:57.451344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.525 [2024-07-11 23:45:57.451534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.525 [2024-07-11 23:45:57.451704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.525 [2024-07-11 23:45:57.451729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.525 [2024-07-11 23:45:57.451744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.525 [2024-07-11 23:45:57.454166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.525 [2024-07-11 23:45:57.462732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.525 [2024-07-11 23:45:57.463199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.525 [2024-07-11 23:45:57.463396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.525 [2024-07-11 23:45:57.463446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.525 [2024-07-11 23:45:57.463464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.526 [2024-07-11 23:45:57.463630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.526 [2024-07-11 23:45:57.463799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.526 [2024-07-11 23:45:57.463823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.526 [2024-07-11 23:45:57.463838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.526 [2024-07-11 23:45:57.466204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 [2024-07-11 23:45:57.475318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.475677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.475908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.475959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-07-11 23:45:57.475976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.787 [2024-07-11 23:45:57.476122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.787 [2024-07-11 23:45:57.476284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.787 [2024-07-11 23:45:57.476308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.787 [2024-07-11 23:45:57.476330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.787 [2024-07-11 23:45:57.478612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.787 [2024-07-11 23:45:57.487971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.488364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.488587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.488637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-07-11 23:45:57.488654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.787 [2024-07-11 23:45:57.488819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.787 [2024-07-11 23:45:57.489004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.787 [2024-07-11 23:45:57.489028] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.787 [2024-07-11 23:45:57.489044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.787 [2024-07-11 23:45:57.491299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 [2024-07-11 23:45:57.500563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.501030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.501431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.501492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-07-11 23:45:57.501512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.787 [2024-07-11 23:45:57.501647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.787 [2024-07-11 23:45:57.501834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.787 [2024-07-11 23:45:57.501858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.787 [2024-07-11 23:45:57.501874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.787 [2024-07-11 23:45:57.504334] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.787 [2024-07-11 23:45:57.513061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.513538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.513786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.513835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-07-11 23:45:57.513852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.787 [2024-07-11 23:45:57.514017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.787 [2024-07-11 23:45:57.514215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.787 [2024-07-11 23:45:57.514240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.787 [2024-07-11 23:45:57.514255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.787 [2024-07-11 23:45:57.516789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 [2024-07-11 23:45:57.525816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.526244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.526532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.526580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-07-11 23:45:57.526597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.787 [2024-07-11 23:45:57.526780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.787 [2024-07-11 23:45:57.526895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.787 [2024-07-11 23:45:57.526917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.787 [2024-07-11 23:45:57.526933] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.787 [2024-07-11 23:45:57.529105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.787 [2024-07-11 23:45:57.538461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.538870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.539043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.539071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-07-11 23:45:57.539089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.787 [2024-07-11 23:45:57.539245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.787 [2024-07-11 23:45:57.539379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.787 [2024-07-11 23:45:57.539402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.787 [2024-07-11 23:45:57.539417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.787 [2024-07-11 23:45:57.541989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 [2024-07-11 23:45:57.550900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.551202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.551424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-07-11 23:45:57.551488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-07-11 23:45:57.551506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.787 [2024-07-11 23:45:57.551688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.787 [2024-07-11 23:45:57.551839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.787 [2024-07-11 23:45:57.551862] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.787 [2024-07-11 23:45:57.551877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.787 [2024-07-11 23:45:57.554269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.787 [2024-07-11 23:45:57.563414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.787 [2024-07-11 23:45:57.563845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.564045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.564073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.564090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.564245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.564396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.564420] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.564435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.566929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.788 [2024-07-11 23:45:57.575890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.576264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.576507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.576559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.576576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.576758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.576908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.576932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.576947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.579314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.788 [2024-07-11 23:45:57.588539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.588991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.589338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.589367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.589384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.589531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.589699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.589722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.589737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.592109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.788 [2024-07-11 23:45:57.601144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.601681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.601977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.602026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.602042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.602216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.602403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.602426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.602441] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.604992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.788 [2024-07-11 23:45:57.613910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.614456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.614765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.614819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.614837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.615045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.615226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.615251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.615267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.617531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.788 [2024-07-11 23:45:57.626395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.626821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.627096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.627124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.627151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.627337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.627488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.627511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.627526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.629842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.788 [2024-07-11 23:45:57.638930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.639285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.639513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.639569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.639587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.639752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.639939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.639963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.639978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.642095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.788 [2024-07-11 23:45:57.651621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.652073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.652320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.652349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.652366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.652531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.652700] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.652724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.652739] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.655056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.788 [2024-07-11 23:45:57.664039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.664378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.664668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.664717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.664735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.664899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.665069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.665093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.665108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.667523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.788 [2024-07-11 23:45:57.676646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.788 [2024-07-11 23:45:57.677017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.677221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-07-11 23:45:57.677250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-07-11 23:45:57.677274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.788 [2024-07-11 23:45:57.677422] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.788 [2024-07-11 23:45:57.677573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.788 [2024-07-11 23:45:57.677596] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.788 [2024-07-11 23:45:57.677611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.788 [2024-07-11 23:45:57.680039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.788 [2024-07-11 23:45:57.689176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.789 [2024-07-11 23:45:57.689619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.689883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.689912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.789 [2024-07-11 23:45:57.689928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.789 [2024-07-11 23:45:57.690111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.789 [2024-07-11 23:45:57.690306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.789 [2024-07-11 23:45:57.690331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.789 [2024-07-11 23:45:57.690346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.789 [2024-07-11 23:45:57.692878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.789 [2024-07-11 23:45:57.701556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.789 [2024-07-11 23:45:57.701992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.702257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.702286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.789 [2024-07-11 23:45:57.702303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.789 [2024-07-11 23:45:57.702449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.789 [2024-07-11 23:45:57.702600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.789 [2024-07-11 23:45:57.702623] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.789 [2024-07-11 23:45:57.702638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.789 [2024-07-11 23:45:57.704883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.789 [2024-07-11 23:45:57.714121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.789 [2024-07-11 23:45:57.714452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.714711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.714761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.789 [2024-07-11 23:45:57.714778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.789 [2024-07-11 23:45:57.714988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.789 [2024-07-11 23:45:57.715149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.789 [2024-07-11 23:45:57.715173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.789 [2024-07-11 23:45:57.715188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.789 [2024-07-11 23:45:57.717504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.789 [2024-07-11 23:45:57.726854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.789 [2024-07-11 23:45:57.727275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.727561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.789 [2024-07-11 23:45:57.727589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:36.789 [2024-07-11 23:45:57.727606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:36.789 [2024-07-11 23:45:57.727789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:36.789 [2024-07-11 23:45:57.727921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.789 [2024-07-11 23:45:57.727944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.789 [2024-07-11 23:45:57.727959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.789 [2024-07-11 23:45:57.730478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.050 [2024-07-11 23:45:57.739406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.050 [2024-07-11 23:45:57.739771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.050 [2024-07-11 23:45:57.740150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.740179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.740202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.740357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.740526] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.740548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.740564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.742643] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.051 [2024-07-11 23:45:57.751984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.752528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.752850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.752900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.752919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.753090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.753282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.753306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.753322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.755628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.051 [2024-07-11 23:45:57.764481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.764976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.765270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.765300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.765317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.765500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.765675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.765698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.765713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.768227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.051 [2024-07-11 23:45:57.777118] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.777486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.777710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.777762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.777779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.777944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.778095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.778118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.778133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.780464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.051 [2024-07-11 23:45:57.789647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.790097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.790302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.790331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.790348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.790513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.790682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.790710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.790726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.792884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.051 [2024-07-11 23:45:57.802045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.802396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.802691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.802742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.802758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.802923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.803074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.803097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.803112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.805329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.051 [2024-07-11 23:45:57.814675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.815042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.815245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.815274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.815292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.815439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.815607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.815631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.815646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.818037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.051 [2024-07-11 23:45:57.827271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.827720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.828016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.828069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.828086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.828306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.828494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.828517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.828539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.830754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.051 [2024-07-11 23:45:57.839795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.840196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.840397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.840425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.840442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.840571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.840722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.840745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.840760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.843057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.051 [2024-07-11 23:45:57.852358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.852848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.853188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.051 [2024-07-11 23:45:57.853217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.051 [2024-07-11 23:45:57.853234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.051 [2024-07-11 23:45:57.853417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.051 [2024-07-11 23:45:57.853550] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.051 [2024-07-11 23:45:57.853572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.051 [2024-07-11 23:45:57.853587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.051 [2024-07-11 23:45:57.856087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.051 [2024-07-11 23:45:57.864812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.051 [2024-07-11 23:45:57.865191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.052 [2024-07-11 23:45:57.865455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.052 [2024-07-11 23:45:57.865483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.052 [2024-07-11 23:45:57.865500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.052 [2024-07-11 23:45:57.865683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.052 [2024-07-11 23:45:57.865870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.052 [2024-07-11 23:45:57.865893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.052 [2024-07-11 23:45:57.865909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.052 [2024-07-11 23:45:57.868353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.052 [2024-07-11 23:45:57.877477] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:37.052 [2024-07-11 23:45:57.878020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.052 [2024-07-11 23:45:57.878325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.052 [2024-07-11 23:45:57.878357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:37.052 [2024-07-11 23:45:57.878375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:37.052 [2024-07-11 23:45:57.878547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:37.052 [2024-07-11 23:45:57.878663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:37.052 [2024-07-11 23:45:57.878687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:37.052 [2024-07-11 23:45:57.878702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:37.052 [2024-07-11 23:45:57.881029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.052 [2024-07-11 23:45:57.890039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.052 [2024-07-11 23:45:57.890457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.052 [2024-07-11 23:45:57.890715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.052 [2024-07-11 23:45:57.890774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.052 [2024-07-11 23:45:57.890791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.052 [2024-07-11 23:45:57.890992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.052 [2024-07-11 23:45:57.891157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.052 [2024-07-11 23:45:57.891181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.052 [2024-07-11 23:45:57.891197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.052 [2024-07-11 23:45:57.893550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[the ten-record reset/reconnect cycle above repeats 41 more times between 23:45:57.902759 and 23:45:58.408699, identical except for timestamps and the wall-clock prefix (00:32:37.052 through 00:32:37.577); omitted]
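For context on the loop above: errno = 111 is ECONNREFUSED. The host can still route to 10.0.0.2, but nothing is listening on TCP port 4420 because the nvmf_tgt process serving nqn.2016-06.io.spdk:cnode1 is gone (the shell reports pid 386996 reaped just below), so every pass of spdk_nvme_ctrlr_reconnect_poll_async() dies at connect() and bdev_nvme schedules the next reset. A hypothetical spot-check from the host side, not a command this harness runs, would confirm the refused state, assuming netcat is installed:

  # exits non-zero and prints nothing while no listener is bound to 4420
  nc -z -w 1 10.0.0.2 4420 || echo "connection refused: target not back yet"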
[one more reset/reconnect cycle, 23:45:58.417670 through 23:45:58.420946, same failure sequence; omitted]
00:32:37.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 386996 Killed "${NVMF_APP[@]}" "$@"
00:32:37.577 23:45:58 -- host/bdevperf.sh@36 -- # tgt_init
00:32:37.577 23:45:58 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:32:37.577 23:45:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:37.577 23:45:58 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:37.577 23:45:58 -- common/autotest_common.sh@10 -- # set +x
[one more reset/reconnect cycle, 23:45:58.430371 through 23:45:58.433713; omitted]
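While the shell restarts the target (tgt_init -> nvmfappstart above), bdev_nvme keeps cycling through these resets; how long it retries and how often are fixed when the controller is attached. A sketch of the knobs involved, assuming the rpc.py short options of this SPDK vintage are -l (--ctrlr-loss-timeout-sec) and -o (--reconnect-delay-sec); verify with "rpc.py bdev_nvme_attach_controller -h" on the actual tree before relying on them:

  # assumed flags: -l = seconds to keep retrying before deleting the
  # controller, -o = seconds to wait between the reset attempts seen above
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -l 30 -o 2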
00:32:37.577 23:45:58 -- nvmf/common.sh@469 -- # nvmfpid=387980
00:32:37.577 23:45:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:37.577 23:45:58 -- nvmf/common.sh@470 -- # waitforlisten 387980
00:32:37.577 23:45:58 -- common/autotest_common.sh@819 -- # '[' -z 387980 ']'
00:32:37.577 23:45:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:37.577 23:45:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:37.577 23:45:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:37.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:37.577 23:45:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:37.577 23:45:58 -- common/autotest_common.sh@10 -- # set +x
00:32:37.577 [2024-07-11 23:45:58.443034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.577 [2024-07-11 23:45:58.443390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.443604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.443632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.577 [2024-07-11 23:45:58.443650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.577 [2024-07-11 23:45:58.443814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.577 [2024-07-11 23:45:58.444007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.577 [2024-07-11 23:45:58.444030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.577 [2024-07-11 23:45:58.444046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.577 [2024-07-11 23:45:58.446390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.577 [2024-07-11 23:45:58.455871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.577 [2024-07-11 23:45:58.456264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.456466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.456518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.577 [2024-07-11 23:45:58.456535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.577 [2024-07-11 23:45:58.456718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.577 [2024-07-11 23:45:58.456923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.577 [2024-07-11 23:45:58.456947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.577 [2024-07-11 23:45:58.456962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.577 [2024-07-11 23:45:58.459520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.577 [2024-07-11 23:45:58.468576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.577 [2024-07-11 23:45:58.468946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.469168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.469197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.577 [2024-07-11 23:45:58.469215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.577 [2024-07-11 23:45:58.469326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.577 [2024-07-11 23:45:58.469477] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.577 [2024-07-11 23:45:58.469499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.577 [2024-07-11 23:45:58.469514] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.577 [2024-07-11 23:45:58.471849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.577 [2024-07-11 23:45:58.479704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:37.577 [2024-07-11 23:45:58.479780] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:37.577 [2024-07-11 23:45:58.481010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.577 [2024-07-11 23:45:58.481408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.481644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.481694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.577 [2024-07-11 23:45:58.481712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.577 [2024-07-11 23:45:58.481866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.577 [2024-07-11 23:45:58.482035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.577 [2024-07-11 23:45:58.482059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.577 [2024-07-11 23:45:58.482075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.577 [2024-07-11 23:45:58.484415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.577 [2024-07-11 23:45:58.493687] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.577 [2024-07-11 23:45:58.494054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.494224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.494253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.577 [2024-07-11 23:45:58.494270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.577 [2024-07-11 23:45:58.494418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.577 [2024-07-11 23:45:58.494587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.577 [2024-07-11 23:45:58.494611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.577 [2024-07-11 23:45:58.494627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.577 [2024-07-11 23:45:58.497013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
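The -m 0xE mask passed to nvmfappstart (and forwarded to EAL as -c 0xE above) selects CPU cores 1 through 3, which matches the "Total cores available: 3" notice and the three reactors started on cores 1, 2 and 3 further down. A purely illustrative bash sketch of how such a mask is formed:

  # Core mask with bits 1..3 set (core 0 left free for the host): prints 0xE.
  printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))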
00:32:37.577 [2024-07-11 23:45:58.506368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.577 [2024-07-11 23:45:58.506745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.506996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.507048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.577 [2024-07-11 23:45:58.507065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.577 [2024-07-11 23:45:58.507256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.577 [2024-07-11 23:45:58.507463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.577 [2024-07-11 23:45:58.507487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.577 [2024-07-11 23:45:58.507502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.577 [2024-07-11 23:45:58.509745] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.577 [2024-07-11 23:45:58.519045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.577 [2024-07-11 23:45:58.519444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.577 [2024-07-11 23:45:58.519719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.578 [2024-07-11 23:45:58.519775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.578 [2024-07-11 23:45:58.519794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.578 [2024-07-11 23:45:58.519940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.578 [2024-07-11 23:45:58.520134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.578 [2024-07-11 23:45:58.520168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.578 [2024-07-11 23:45:58.520184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.578 [2024-07-11 23:45:58.522570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.578 EAL: No free 2048 kB hugepages reported on node 1
00:32:37.837 [2024-07-11 23:45:58.531507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.837 [2024-07-11 23:45:58.531899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.532086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.532115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.837 [2024-07-11 23:45:58.532133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.837 [2024-07-11 23:45:58.532291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.837 [2024-07-11 23:45:58.532461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.837 [2024-07-11 23:45:58.532486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.837 [2024-07-11 23:45:58.532502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.837 [2024-07-11 23:45:58.534887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.837 [2024-07-11 23:45:58.543939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.837 [2024-07-11 23:45:58.544262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.544461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.544507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.837 [2024-07-11 23:45:58.544525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.837 [2024-07-11 23:45:58.544672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.837 [2024-07-11 23:45:58.544824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.837 [2024-07-11 23:45:58.544847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.837 [2024-07-11 23:45:58.544862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.837 [2024-07-11 23:45:58.547220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
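The EAL hugepage notice above is non-fatal: it only says NUMA node 1 reported no free 2048 kB hugepages, and EAL takes its pages from the nodes that do have them. A quick check of the per-node counts, assuming the kernel's standard hugetlb sysfs layout:

  # Reserved and free 2 MB hugepages per NUMA node.
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages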
00:32:37.837 [2024-07-11 23:45:58.556506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.837 [2024-07-11 23:45:58.556853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.557057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.557086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.837 [2024-07-11 23:45:58.557104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.837 [2024-07-11 23:45:58.557259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.837 [2024-07-11 23:45:58.557418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.837 [2024-07-11 23:45:58.557443] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.837 [2024-07-11 23:45:58.557458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.837 [2024-07-11 23:45:58.559065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:37.837 [2024-07-11 23:45:58.559861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.837 [2024-07-11 23:45:58.569179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.837 [2024-07-11 23:45:58.569645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.569903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.569951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.837 [2024-07-11 23:45:58.569973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.837 [2024-07-11 23:45:58.570162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.837 [2024-07-11 23:45:58.570338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.837 [2024-07-11 23:45:58.570363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.837 [2024-07-11 23:45:58.570384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.837 [2024-07-11 23:45:58.572922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.837 [2024-07-11 23:45:58.581915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.837 [2024-07-11 23:45:58.582338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.582516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.582563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.837 [2024-07-11 23:45:58.582584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.837 [2024-07-11 23:45:58.582753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.837 [2024-07-11 23:45:58.582889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.837 [2024-07-11 23:45:58.582914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.837 [2024-07-11 23:45:58.582932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.837 [2024-07-11 23:45:58.585207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.837 [2024-07-11 23:45:58.594250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.837 [2024-07-11 23:45:58.594632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.594868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.594898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.837 [2024-07-11 23:45:58.594916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.837 [2024-07-11 23:45:58.595118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.837 [2024-07-11 23:45:58.595296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.837 [2024-07-11 23:45:58.595330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.837 [2024-07-11 23:45:58.595348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.837 [2024-07-11 23:45:58.597686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.837 [2024-07-11 23:45:58.606810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.837 [2024-07-11 23:45:58.607191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.607361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.837 [2024-07-11 23:45:58.607391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.837 [2024-07-11 23:45:58.607410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.838 [2024-07-11 23:45:58.607594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.838 [2024-07-11 23:45:58.607765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.838 [2024-07-11 23:45:58.607790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.838 [2024-07-11 23:45:58.607807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.838 [2024-07-11 23:45:58.610308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.838 [2024-07-11 23:45:58.619290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.838 [2024-07-11 23:45:58.619740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.619930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.619978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.838 [2024-07-11 23:45:58.620000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.838 [2024-07-11 23:45:58.620226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.838 [2024-07-11 23:45:58.620385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.838 [2024-07-11 23:45:58.620410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.838 [2024-07-11 23:45:58.620430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.838 [2024-07-11 23:45:58.622603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.838 [2024-07-11 23:45:58.631850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.838 [2024-07-11 23:45:58.632204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.632430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.632480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.838 [2024-07-11 23:45:58.632499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.838 [2024-07-11 23:45:58.632629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.838 [2024-07-11 23:45:58.632798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.838 [2024-07-11 23:45:58.632822] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.838 [2024-07-11 23:45:58.632850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.838 [2024-07-11 23:45:58.634930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.838 [2024-07-11 23:45:58.644555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.838 [2024-07-11 23:45:58.644954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.645198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.645228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.838 [2024-07-11 23:45:58.645247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.838 [2024-07-11 23:45:58.645363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.838 [2024-07-11 23:45:58.645568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.838 [2024-07-11 23:45:58.645592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.838 [2024-07-11 23:45:58.645609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.838 [2024-07-11 23:45:58.647997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.838 [2024-07-11 23:45:58.652364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:37.838 [2024-07-11 23:45:58.652499] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:37.838 [2024-07-11 23:45:58.652519] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:37.838 [2024-07-11 23:45:58.652533] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
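With the tracepoint group mask 0xFFFF every trace group is enabled, so the restarted target streams its events into the shared-memory file named in the notice. Capturing them follows the log's own hint; only the copy destination below is an arbitrary example path:

  # Snapshot live events, then keep the shm file for offline analysis.
  spdk_trace -s nvmf -i 0
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0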
00:32:37.838 [2024-07-11 23:45:58.652601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:37.838 [2024-07-11 23:45:58.652657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:32:37.838 [2024-07-11 23:45:58.652660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:37.838 [2024-07-11 23:45:58.657147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.838 [2024-07-11 23:45:58.657592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.657766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.657794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.838 [2024-07-11 23:45:58.657815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.838 [2024-07-11 23:45:58.658020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.838 [2024-07-11 23:45:58.658202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.838 [2024-07-11 23:45:58.658228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.838 [2024-07-11 23:45:58.658245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.838 [2024-07-11 23:45:58.660510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.838 [2024-07-11 23:45:58.669804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.838 [2024-07-11 23:45:58.670237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.670465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.670495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.838 [2024-07-11 23:45:58.670526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.838 [2024-07-11 23:45:58.670744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.838 [2024-07-11 23:45:58.670939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.838 [2024-07-11 23:45:58.670964] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.838 [2024-07-11 23:45:58.670984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.838 [2024-07-11 23:45:58.673476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.838 [2024-07-11 23:45:58.682392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.838 [2024-07-11 23:45:58.682851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.683031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.838 [2024-07-11 23:45:58.683061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.838 [2024-07-11 23:45:58.683082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.839 [2024-07-11 23:45:58.683272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.839 [2024-07-11 23:45:58.683410] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.839 [2024-07-11 23:45:58.683435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.839 [2024-07-11 23:45:58.683454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.839 [2024-07-11 23:45:58.685541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.839 [2024-07-11 23:45:58.695068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.839 [2024-07-11 23:45:58.695531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.695733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.695762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.839 [2024-07-11 23:45:58.695785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.839 [2024-07-11 23:45:58.695964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.839 [2024-07-11 23:45:58.696149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.839 [2024-07-11 23:45:58.696174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.839 [2024-07-11 23:45:58.696194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.839 [2024-07-11 23:45:58.698567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.839 [2024-07-11 23:45:58.707761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.839 [2024-07-11 23:45:58.708177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.708379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.708408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.839 [2024-07-11 23:45:58.708428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.839 [2024-07-11 23:45:58.708616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.839 [2024-07-11 23:45:58.708771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.839 [2024-07-11 23:45:58.708796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.839 [2024-07-11 23:45:58.708815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.839 [2024-07-11 23:45:58.711197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.839 [2024-07-11 23:45:58.720185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.839 [2024-07-11 23:45:58.720669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.720870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.720899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.839 [2024-07-11 23:45:58.720921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.839 [2024-07-11 23:45:58.721117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.839 [2024-07-11 23:45:58.721321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.839 [2024-07-11 23:45:58.721346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.839 [2024-07-11 23:45:58.721366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.839 [2024-07-11 23:45:58.723762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.839 [2024-07-11 23:45:58.732778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.839 [2024-07-11 23:45:58.733240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.733419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.733448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.839 [2024-07-11 23:45:58.733469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.839 [2024-07-11 23:45:58.733647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.839 [2024-07-11 23:45:58.733819] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.839 [2024-07-11 23:45:58.733844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.839 [2024-07-11 23:45:58.733862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.839 [2024-07-11 23:45:58.736279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.839 [2024-07-11 23:45:58.745378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.839 [2024-07-11 23:45:58.745767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.745930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.745958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.839 [2024-07-11 23:45:58.745976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.839 [2024-07-11 23:45:58.746179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.839 [2024-07-11 23:45:58.746312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.839 [2024-07-11 23:45:58.746336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.839 [2024-07-11 23:45:58.746351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.839 [2024-07-11 23:45:58.748668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.839 [2024-07-11 23:45:58.758081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.839 [2024-07-11 23:45:58.758470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.758666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.839 [2024-07-11 23:45:58.758695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.839 [2024-07-11 23:45:58.758712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.839 [2024-07-11 23:45:58.758895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.839 [2024-07-11 23:45:58.759046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.839 [2024-07-11 23:45:58.759070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.839 [2024-07-11 23:45:58.759085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.840 [2024-07-11 23:45:58.761446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.840 [2024-07-11 23:45:58.770358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.840 [2024-07-11 23:45:58.770701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.840 [2024-07-11 23:45:58.770923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.840 [2024-07-11 23:45:58.770952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.840 [2024-07-11 23:45:58.770969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.840 [2024-07-11 23:45:58.771160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.840 [2024-07-11 23:45:58.771329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.840 [2024-07-11 23:45:58.771353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.840 [2024-07-11 23:45:58.771368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:37.840 [2024-07-11 23:45:58.773719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:37.840 [2024-07-11 23:45:58.782955] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:37.840 [2024-07-11 23:45:58.783303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.840 [2024-07-11 23:45:58.783489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.840 [2024-07-11 23:45:58.783518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:37.840 [2024-07-11 23:45:58.783535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:37.840 [2024-07-11 23:45:58.783701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:37.840 [2024-07-11 23:45:58.783876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:37.840 [2024-07-11 23:45:58.783900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:37.840 [2024-07-11 23:45:58.783915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.786361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.795377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.795722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.795937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.795965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.795982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.796176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.796381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.796405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.796420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.798824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.807762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.808094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.808267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.808296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.808314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.808496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.808664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.808687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.808703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.811037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.820248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.820624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.820806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.820834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.820851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.821034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.821247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.821276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.821293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.823501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.832790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.833160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.833326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.833354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.833371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.833518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.833650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.833673] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.833688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.835953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.845430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.845807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.846023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.846052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.846069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.846224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.846375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.846398] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.846413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.848679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.857976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.858305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.858490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.858518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.858535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.858664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.858814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.858837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.858862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.861210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.870532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.870867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.871051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.871079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.871096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.871286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.871401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.871424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.871439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.873719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.883072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.883438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.883623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.883651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.883668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.883851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.099 [2024-07-11 23:45:58.884091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.099 [2024-07-11 23:45:58.884114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.099 [2024-07-11 23:45:58.884130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.099 [2024-07-11 23:45:58.886479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.099 [2024-07-11 23:45:58.895557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.099 [2024-07-11 23:45:58.895908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.896093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.099 [2024-07-11 23:45:58.896122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.099 [2024-07-11 23:45:58.896148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.099 [2024-07-11 23:45:58.896316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.100 [2024-07-11 23:45:58.896485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.100 [2024-07-11 23:45:58.896509] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.100 [2024-07-11 23:45:58.896524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.100 [2024-07-11 23:45:58.898730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.100 [2024-07-11 23:45:58.908078] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.100 [2024-07-11 23:45:58.908462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.908649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.908677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.100 [2024-07-11 23:45:58.908694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.100 [2024-07-11 23:45:58.908841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.100 [2024-07-11 23:45:58.909010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.100 [2024-07-11 23:45:58.909033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.100 [2024-07-11 23:45:58.909048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.100 [2024-07-11 23:45:58.911336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.100 [2024-07-11 23:45:58.920674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.100 [2024-07-11 23:45:58.921045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.921243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.921272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.100 [2024-07-11 23:45:58.921290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.100 [2024-07-11 23:45:58.921455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.100 [2024-07-11 23:45:58.921606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.100 [2024-07-11 23:45:58.921629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.100 [2024-07-11 23:45:58.921644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.100 [2024-07-11 23:45:58.923906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.100 [2024-07-11 23:45:58.933145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.100 [2024-07-11 23:45:58.933515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.933705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.933733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.100 [2024-07-11 23:45:58.933751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.100 [2024-07-11 23:45:58.933897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.100 [2024-07-11 23:45:58.934084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.100 [2024-07-11 23:45:58.934107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.100 [2024-07-11 23:45:58.934122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.100 [2024-07-11 23:45:58.936300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.100 [2024-07-11 23:45:58.945553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.100 [2024-07-11 23:45:58.945966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.946188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.946217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.100 [2024-07-11 23:45:58.946235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.100 [2024-07-11 23:45:58.946429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.100 [2024-07-11 23:45:58.946608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.100 [2024-07-11 23:45:58.946628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.100 [2024-07-11 23:45:58.946641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.100 [2024-07-11 23:45:58.948838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.100 [2024-07-11 23:45:58.957939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.100 [2024-07-11 23:45:58.958351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.958578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.958603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.100 [2024-07-11 23:45:58.958618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.100 [2024-07-11 23:45:58.958761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.100 [2024-07-11 23:45:58.958892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.100 [2024-07-11 23:45:58.958913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.100 [2024-07-11 23:45:58.958926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.100 [2024-07-11 23:45:58.960992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.100 [2024-07-11 23:45:58.970157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.100 [2024-07-11 23:45:58.970542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.970772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.100 [2024-07-11 23:45:58.970796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.100 [2024-07-11 23:45:58.970811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.100 [2024-07-11 23:45:58.970954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.100 [2024-07-11 23:45:58.971131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.100 [2024-07-11 23:45:58.971161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.100 [2024-07-11 23:45:58.971175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.100 [2024-07-11 23:45:58.973240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.100 [2024-07-11 23:45:58.982266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.100 [2024-07-11 23:45:58.982617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:58.982828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:58.982853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.100 [2024-07-11 23:45:58.982868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.100 [2024-07-11 23:45:58.982948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.100 [2024-07-11 23:45:58.983093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.100 [2024-07-11 23:45:58.983113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.100 [2024-07-11 23:45:58.983149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.100 [2024-07-11 23:45:58.985146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.100 [2024-07-11 23:45:58.994363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.100 [2024-07-11 23:45:58.994712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:58.994913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:58.994937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.100 [2024-07-11 23:45:58.994952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.100 [2024-07-11 23:45:58.995095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.100 [2024-07-11 23:45:58.995252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.100 [2024-07-11 23:45:58.995273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.100 [2024-07-11 23:45:58.995288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.100 [2024-07-11 23:45:58.997364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.100 [2024-07-11 23:45:59.006586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.100 [2024-07-11 23:45:59.006892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:59.007068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:59.007092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.100 [2024-07-11 23:45:59.007107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.100 [2024-07-11 23:45:59.007292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.100 [2024-07-11 23:45:59.007428] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.100 [2024-07-11 23:45:59.007464] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.100 [2024-07-11 23:45:59.007477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.100 [2024-07-11 23:45:59.009571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.100 [2024-07-11 23:45:59.018857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.100 [2024-07-11 23:45:59.019240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:59.019459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.100 [2024-07-11 23:45:59.019487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.100 [2024-07-11 23:45:59.019503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.101 [2024-07-11 23:45:59.019647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.101 [2024-07-11 23:45:59.019794] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.101 [2024-07-11 23:45:59.019814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.101 [2024-07-11 23:45:59.019827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.101 [2024-07-11 23:45:59.021962] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.101 [2024-07-11 23:45:59.031195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.101 [2024-07-11 23:45:59.031527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.101 [2024-07-11 23:45:59.031703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.101 [2024-07-11 23:45:59.031727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.101 [2024-07-11 23:45:59.031743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.101 [2024-07-11 23:45:59.031871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.101 [2024-07-11 23:45:59.031986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.101 [2024-07-11 23:45:59.032006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.101 [2024-07-11 23:45:59.032019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.101 [2024-07-11 23:45:59.034259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.101 [2024-07-11 23:45:59.043603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.101 [2024-07-11 23:45:59.043931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.101 [2024-07-11 23:45:59.044150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.101 [2024-07-11 23:45:59.044176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.101 [2024-07-11 23:45:59.044192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.101 [2024-07-11 23:45:59.044325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.101 [2024-07-11 23:45:59.044458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.101 [2024-07-11 23:45:59.044479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.101 [2024-07-11 23:45:59.044492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.101 [2024-07-11 23:45:59.046598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.363 [2024-07-11 23:45:59.055812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.056169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.056383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.056407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.056427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.056587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.056781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.056801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.056814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.058887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.363 [2024-07-11 23:45:59.068056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.068467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.068618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.068642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.068657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.068801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.068963] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.068983] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.068996] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.071057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.363 [2024-07-11 23:45:59.080451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.080770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.080963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.080988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.081002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.081203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.081388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.081410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.081424] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.083449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.363 [2024-07-11 23:45:59.092778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.093181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.093359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.093385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.093400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.093519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.093666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.093686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.093699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.095737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.363 [2024-07-11 23:45:59.105117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.105475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.105657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.105682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.105697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.105841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.106003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.106023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.106036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.108192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.363 [2024-07-11 23:45:59.117295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.117645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.117820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.117844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.117859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.118003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.118207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.118228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.118242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.120369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.363 [2024-07-11 23:45:59.129516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.129855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.130034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.130058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.130074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.130292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.130401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.130422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.130436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.132633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.363 [2024-07-11 23:45:59.141817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.142166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.142347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.142373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.142388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.142533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.142697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.142719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.363 [2024-07-11 23:45:59.142732] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.363 [2024-07-11 23:45:59.144883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.363 [2024-07-11 23:45:59.154279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.363 [2024-07-11 23:45:59.154625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.154845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.363 [2024-07-11 23:45:59.154870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.363 [2024-07-11 23:45:59.154885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.363 [2024-07-11 23:45:59.155043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.363 [2024-07-11 23:45:59.155270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.363 [2024-07-11 23:45:59.155292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.155306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.157330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.364 [2024-07-11 23:45:59.166311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.166678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.166873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.166897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.166912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.167055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.167244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.167271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.167286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.169387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.364 [2024-07-11 23:45:59.178605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.179052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.179348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.179375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.179391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.179503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.179666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.179686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.179699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.181648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.364 [2024-07-11 23:45:59.191025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.191484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.191767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.191791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.191806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.191950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.192081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.192101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.192129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.194260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.364 [2024-07-11 23:45:59.203348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.203729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.203968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.203993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.204008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.204161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.204314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.204335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.204354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.206259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.364 [2024-07-11 23:45:59.215631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.216118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.216419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.216459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.216475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.216622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.216790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.216811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.216825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.218976] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.364 [2024-07-11 23:45:59.228014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.228427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.228633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.228657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.228672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.228815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.228930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.228950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.228964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.231254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.364 [2024-07-11 23:45:59.240291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.240712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.240935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.240959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.240974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.241102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.241279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.241300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.241315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.243395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.364 [2024-07-11 23:45:59.252542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.252970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.253251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.253278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.253294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.253505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.253621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.253643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.253657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.255767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.364 [2024-07-11 23:45:59.264946] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.265292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.265494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.265519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.265534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.265646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.265808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.265829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.364 [2024-07-11 23:45:59.265843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.364 [2024-07-11 23:45:59.267844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.364 [2024-07-11 23:45:59.277292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.364 [2024-07-11 23:45:59.277731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.278023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.364 [2024-07-11 23:45:59.278047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.364 [2024-07-11 23:45:59.278062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.364 [2024-07-11 23:45:59.278233] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.364 [2024-07-11 23:45:59.278402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.364 [2024-07-11 23:45:59.278437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.365 [2024-07-11 23:45:59.278451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.365 [2024-07-11 23:45:59.280553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.365 [2024-07-11 23:45:59.289674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.365 [2024-07-11 23:45:59.290102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.365 [2024-07-11 23:45:59.290381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.365 [2024-07-11 23:45:59.290407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.365 [2024-07-11 23:45:59.290422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.365 [2024-07-11 23:45:59.290568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.365 [2024-07-11 23:45:59.290699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.365 [2024-07-11 23:45:59.290720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.365 [2024-07-11 23:45:59.290733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.365 [2024-07-11 23:45:59.292819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.365 [2024-07-11 23:45:59.301889] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.365 [2024-07-11 23:45:59.302341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.365 [2024-07-11 23:45:59.302551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.365 [2024-07-11 23:45:59.302576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.365 [2024-07-11 23:45:59.302591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.365 [2024-07-11 23:45:59.302734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.365 [2024-07-11 23:45:59.302881] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.365 [2024-07-11 23:45:59.302901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.365 [2024-07-11 23:45:59.302914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.365 [2024-07-11 23:45:59.304970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.628 [2024-07-11 23:45:59.314148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.628 [2024-07-11 23:45:59.314504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.314697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.314722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.628 [2024-07-11 23:45:59.314737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.628 [2024-07-11 23:45:59.314865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.628 [2024-07-11 23:45:59.315012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.628 [2024-07-11 23:45:59.315032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.628 [2024-07-11 23:45:59.315045] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.628 [2024-07-11 23:45:59.317028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.628 [2024-07-11 23:45:59.326429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.628 [2024-07-11 23:45:59.326849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.327094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.327119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.628 [2024-07-11 23:45:59.327134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.628 [2024-07-11 23:45:59.327274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.628 [2024-07-11 23:45:59.327435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.628 [2024-07-11 23:45:59.327472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.628 [2024-07-11 23:45:59.327486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.628 [2024-07-11 23:45:59.329557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.628 [2024-07-11 23:45:59.338838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.628 [2024-07-11 23:45:59.339294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.339536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.339560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.628 [2024-07-11 23:45:59.339575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.628 [2024-07-11 23:45:59.339687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.628 [2024-07-11 23:45:59.339818] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.628 [2024-07-11 23:45:59.339838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.628 [2024-07-11 23:45:59.339852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.628 [2024-07-11 23:45:59.341953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.628 [2024-07-11 23:45:59.351181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.628 [2024-07-11 23:45:59.351609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.351819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.351843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.628 [2024-07-11 23:45:59.351858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.628 [2024-07-11 23:45:59.351970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.628 [2024-07-11 23:45:59.352101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.628 [2024-07-11 23:45:59.352136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.628 [2024-07-11 23:45:59.352163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.628 [2024-07-11 23:45:59.354172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.628 [2024-07-11 23:45:59.363584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.628 [2024-07-11 23:45:59.363999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.364248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.364279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.628 [2024-07-11 23:45:59.364295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.628 [2024-07-11 23:45:59.364439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.628 [2024-07-11 23:45:59.364554] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.628 [2024-07-11 23:45:59.364574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.628 [2024-07-11 23:45:59.364588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.628 [2024-07-11 23:45:59.366736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.628 [2024-07-11 23:45:59.375973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.628 [2024-07-11 23:45:59.376396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.376602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.376627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.628 [2024-07-11 23:45:59.376643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.628 [2024-07-11 23:45:59.376786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.628 [2024-07-11 23:45:59.376934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.628 [2024-07-11 23:45:59.376954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.628 [2024-07-11 23:45:59.376967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.628 [2024-07-11 23:45:59.379093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.628 [2024-07-11 23:45:59.388290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.628 [2024-07-11 23:45:59.388719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.388962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.628 [2024-07-11 23:45:59.388987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.628 [2024-07-11 23:45:59.389002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.628 [2024-07-11 23:45:59.389175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.389278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.389300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.389314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.391439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.629 [2024-07-11 23:45:59.400605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.401030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.401252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.401278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.401299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.401447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.401594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.401614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.401627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.403749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.629 [2024-07-11 23:45:59.412881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.413336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.413555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.413579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.413594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.413722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.413915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.413936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.413949] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.416013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.629 [2024-07-11 23:45:59.425342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.425810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.426001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.426026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.426041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.426238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.426438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.426459] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.426473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.428691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.629 [2024-07-11 23:45:59.437367] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.437808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.438077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.438102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.438117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.438280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.438416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.438450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.438464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.440519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.629 [2024-07-11 23:45:59.449544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.449938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.450166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.450193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.450209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.450326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.450508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.450529] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.450543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.452630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.629 [2024-07-11 23:45:59.461896] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.462324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.462547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.462573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.462588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.462747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.462878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.462898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.462912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.464978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.629 [2024-07-11 23:45:59.474217] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.474651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.474935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.474959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.474974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.475164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.475289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.475311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.475324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.477372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.629 [2024-07-11 23:45:59.486610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.487059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.487267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.487294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.487309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.487488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.487604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.487624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.487638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.489756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.629 [2024-07-11 23:45:59.498934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.499391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.499669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.499695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.499711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.499870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.629 [2024-07-11 23:45:59.500034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.629 [2024-07-11 23:45:59.500054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.629 [2024-07-11 23:45:59.500067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.629 [2024-07-11 23:45:59.502340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.629 [2024-07-11 23:45:59.511133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.629 [2024-07-11 23:45:59.511599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.511835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.629 [2024-07-11 23:45:59.511859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.629 [2024-07-11 23:45:59.511874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.629 [2024-07-11 23:45:59.512017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.630 [2024-07-11 23:45:59.512208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.630 [2024-07-11 23:45:59.512236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.630 [2024-07-11 23:45:59.512251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.630 [2024-07-11 23:45:59.514361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.630 [2024-07-11 23:45:59.523616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.630 [2024-07-11 23:45:59.524035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.524272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.524298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.630 [2024-07-11 23:45:59.524314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.630 [2024-07-11 23:45:59.524446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.630 [2024-07-11 23:45:59.524624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.630 [2024-07-11 23:45:59.524644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.630 [2024-07-11 23:45:59.524658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.630 [2024-07-11 23:45:59.526610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.630 [2024-07-11 23:45:59.535936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.630 [2024-07-11 23:45:59.536342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.536564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.536589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.630 [2024-07-11 23:45:59.536604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.630 [2024-07-11 23:45:59.536780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.630 [2024-07-11 23:45:59.536942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.630 [2024-07-11 23:45:59.536962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.630 [2024-07-11 23:45:59.536976] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.630 [2024-07-11 23:45:59.538992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.630 [2024-07-11 23:45:59.548242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.630 [2024-07-11 23:45:59.548658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.548921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.548946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.630 [2024-07-11 23:45:59.548961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.630 [2024-07-11 23:45:59.549162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.630 [2024-07-11 23:45:59.549331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.630 [2024-07-11 23:45:59.549353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.630 [2024-07-11 23:45:59.549371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.630 [2024-07-11 23:45:59.551586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.630 [2024-07-11 23:45:59.560409] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.630 [2024-07-11 23:45:59.560845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.561116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.561165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.630 [2024-07-11 23:45:59.561181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.630 [2024-07-11 23:45:59.561312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.630 [2024-07-11 23:45:59.561448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.630 [2024-07-11 23:45:59.561469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.630 [2024-07-11 23:45:59.561482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.630 [2024-07-11 23:45:59.563621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.630 [2024-07-11 23:45:59.572678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.630 [2024-07-11 23:45:59.573031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.573272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.630 [2024-07-11 23:45:59.573298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.630 [2024-07-11 23:45:59.573313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.630 [2024-07-11 23:45:59.573441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.630 [2024-07-11 23:45:59.573573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.630 [2024-07-11 23:45:59.573594] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.630 [2024-07-11 23:45:59.573608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.630 [2024-07-11 23:45:59.575654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.893 [2024-07-11 23:45:59.584929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.893 [2024-07-11 23:45:59.585406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.585635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.585660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.893 [2024-07-11 23:45:59.585676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.893 [2024-07-11 23:45:59.585819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.893 [2024-07-11 23:45:59.585935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.893 [2024-07-11 23:45:59.585955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.893 [2024-07-11 23:45:59.585969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.893 [2024-07-11 23:45:59.587981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.893 [2024-07-11 23:45:59.597228] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.893 [2024-07-11 23:45:59.597679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.597916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.597941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.893 [2024-07-11 23:45:59.597956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.893 [2024-07-11 23:45:59.598068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.893 [2024-07-11 23:45:59.598210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.893 [2024-07-11 23:45:59.598233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.893 [2024-07-11 23:45:59.598247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.893 [2024-07-11 23:45:59.600363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.893 [2024-07-11 23:45:59.609679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.893 [2024-07-11 23:45:59.610062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.610279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.610306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.893 [2024-07-11 23:45:59.610324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.893 [2024-07-11 23:45:59.610425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.893 [2024-07-11 23:45:59.610588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.893 [2024-07-11 23:45:59.610609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.893 [2024-07-11 23:45:59.610622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.893 [2024-07-11 23:45:59.612867] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.893 [2024-07-11 23:45:59.621936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.893 [2024-07-11 23:45:59.622332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.622542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.622566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.893 [2024-07-11 23:45:59.622581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.893 [2024-07-11 23:45:59.622725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.893 [2024-07-11 23:45:59.622872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.893 [2024-07-11 23:45:59.622892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.893 [2024-07-11 23:45:59.622906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.893 [2024-07-11 23:45:59.625109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.893 [2024-07-11 23:45:59.634305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.893 [2024-07-11 23:45:59.634721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.634913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.634938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.893 [2024-07-11 23:45:59.634952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.893 [2024-07-11 23:45:59.635165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.893 [2024-07-11 23:45:59.635335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.893 [2024-07-11 23:45:59.635357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.893 [2024-07-11 23:45:59.635371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.893 [2024-07-11 23:45:59.637563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.893 [2024-07-11 23:45:59.646604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.893 [2024-07-11 23:45:59.646993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.647236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.647263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.893 [2024-07-11 23:45:59.647279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.893 [2024-07-11 23:45:59.647442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.893 [2024-07-11 23:45:59.647577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.893 [2024-07-11 23:45:59.647597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.893 [2024-07-11 23:45:59.647610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.893 [2024-07-11 23:45:59.649718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.893 [2024-07-11 23:45:59.658868] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.893 [2024-07-11 23:45:59.659297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.659612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.893 [2024-07-11 23:45:59.659636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.893 [2024-07-11 23:45:59.659651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.893 [2024-07-11 23:45:59.659763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.893 [2024-07-11 23:45:59.659888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.893 [2024-07-11 23:45:59.659908] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.893 [2024-07-11 23:45:59.659921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.893 [2024-07-11 23:45:59.661992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.893 [2024-07-11 23:45:59.671290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.671679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.671871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.671896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.671910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.672038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.672197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.672219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.672234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.674387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.894 [2024-07-11 23:45:59.683640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.684092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.684370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.684397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.684412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.684589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.684752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.684772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.684785] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.686923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.894 [2024-07-11 23:45:59.695937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.696433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.696651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.696676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.696690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.696850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.697044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.697065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.697079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.699209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.894 [2024-07-11 23:45:59.708275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.708689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.708936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.708965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.708981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.709146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.709323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.709344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.709358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.711410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.894 [2024-07-11 23:45:59.720599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.721023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.721238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.721265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.721281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.721461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.721572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.721593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.721605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.723769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.894 [2024-07-11 23:45:59.732986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.733392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.733600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.733625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.733640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.733783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.733945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.733966] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.733979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.736088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.894 [2024-07-11 23:45:59.745482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.745933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.746129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.746176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.746198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.746347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.746528] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.746548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.746562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.748790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.894 [2024-07-11 23:45:59.757891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:38.894 [2024-07-11 23:45:59.758256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.758546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.894 [2024-07-11 23:45:59.758571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420 00:32:38.894 [2024-07-11 23:45:59.758587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set 00:32:38.894 [2024-07-11 23:45:59.758783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor 00:32:38.894 [2024-07-11 23:45:59.758935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:38.894 [2024-07-11 23:45:59.758956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:38.894 [2024-07-11 23:45:59.758970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:38.894 [2024-07-11 23:45:59.760903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.894 [2024-07-11 23:45:59.770434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.894 [2024-07-11 23:45:59.770824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.894 23:45:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:38.894 23:45:59 -- common/autotest_common.sh@852 -- # return 0
00:32:38.894 [2024-07-11 23:45:59.771034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.894 [2024-07-11 23:45:59.771059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.894 [2024-07-11 23:45:59.771074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.894 23:45:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:32:38.894 [2024-07-11 23:45:59.771217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.894 23:45:59 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:38.894 23:45:59 -- common/autotest_common.sh@10 -- # set +x
00:32:38.894 [2024-07-11 23:45:59.771418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.894 [2024-07-11 23:45:59.771439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.894 [2024-07-11 23:45:59.771453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.894 [2024-07-11 23:45:59.773556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.894 [2024-07-11 23:45:59.782473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.894 [2024-07-11 23:45:59.782921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.894 [2024-07-11 23:45:59.783229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.894 [2024-07-11 23:45:59.783260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.894 [2024-07-11 23:45:59.783276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.894 [2024-07-11 23:45:59.783392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.894 [2024-07-11 23:45:59.783564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.895 [2024-07-11 23:45:59.783585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.895 [2024-07-11 23:45:59.783599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.895 [2024-07-11 23:45:59.785778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
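[Note] The `(( i == 0 ))` / `return 0` pair traced above (common/autotest_common.sh@848/@852), followed immediately by `timing_exit start_nvmf_tgt`, is the tail of a countdown-style wait helper returning success: the harness has been polling the freshly started NVMe-oF target and the poll completed before the counter ran out. A sketch of that idiom, with hypothetical names and poll body (only the last two traced lines are visible here, so the rest is an assumption, not SPDK's actual helper):

    # Hedged sketch of the countdown-wait idiom behind the '(( i == 0 ))' /
    # 'return 0' trace; names and the probe are illustrative only.
    wait_for_condition() {
        local i
        for ((i = 40; i != 0; i--)); do
            check_condition && break   # hypothetical probe, e.g. an RPC round-trip
            sleep 0.5
        done
        (( i == 0 )) && return 1       # countdown exhausted: timed out
        return 0                       # condition met with retries to spare
    }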
00:32:38.895 [2024-07-11 23:45:59.794762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.895 [2024-07-11 23:45:59.795155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.795330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.795356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.895 [2024-07-11 23:45:59.795371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.895 [2024-07-11 23:45:59.795560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.895 [2024-07-11 23:45:59.795738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.895 [2024-07-11 23:45:59.795758] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.895 [2024-07-11 23:45:59.795772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.895 23:45:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:38.895 [2024-07-11 23:45:59.798068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.895 23:45:59 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:38.895 23:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:38.895 23:45:59 -- common/autotest_common.sh@10 -- # set +x
00:32:38.895 [2024-07-11 23:45:59.802307] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:38.895 [2024-07-11 23:45:59.807063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.895 [2024-07-11 23:45:59.807450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.807718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.807743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.895 [2024-07-11 23:45:59.807759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.895 [2024-07-11 23:45:59.807891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.895 [2024-07-11 23:45:59.808026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.895 [2024-07-11 23:45:59.808047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.895 [2024-07-11 23:45:59.808061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.895 23:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:38.895 23:45:59 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:38.895 23:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:38.895 23:45:59 -- common/autotest_common.sh@10 -- # set +x
00:32:38.895 [2024-07-11 23:45:59.810164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.895 [2024-07-11 23:45:59.819576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.895 [2024-07-11 23:45:59.819990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.820265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.820292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.895 [2024-07-11 23:45:59.820307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.895 [2024-07-11 23:45:59.820485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.895 [2024-07-11 23:45:59.820642] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.895 [2024-07-11 23:45:59.820663] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.895 [2024-07-11 23:45:59.820675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.895 [2024-07-11 23:45:59.822722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:38.895 [2024-07-11 23:45:59.831956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:38.895 [2024-07-11 23:45:59.832383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.832632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.895 [2024-07-11 23:45:59.832656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:38.895 [2024-07-11 23:45:59.832671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:38.895 [2024-07-11 23:45:59.832783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:38.895 [2024-07-11 23:45:59.832929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:38.895 [2024-07-11 23:45:59.832950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:38.895 [2024-07-11 23:45:59.832962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:38.895 [2024-07-11 23:45:59.834919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
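[Note] The recurring `common/autotest_common.sh@551 -- # xtrace_disable` and `@579 -- # [[ 0 == 0 ]]` pairs bracketing each `rpc_cmd` call are the wrapper muting xtrace while the RPC runs and then asserting on its exit code. Roughly, and only as a hedged reconstruction from the visible trace (socket handling and `$rootdir` are assumptions, not the actual helper in autotest_common.sh):

    # Hedged reconstruction of the rpc_cmd wrapper pattern seen in the trace;
    # internals are assumptions, only the xtrace_disable / exit-code check is visible.
    rpc_cmd() {
        xtrace_disable
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]   # the '[[ 0 == 0 ]]' lines in the log are this check passing
    }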
00:32:39.155 [2024-07-11 23:45:59.844303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:39.155 [2024-07-11 23:45:59.844846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:39.155 [2024-07-11 23:45:59.845122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:39.155 [2024-07-11 23:45:59.845169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:39.155 [2024-07-11 23:45:59.845191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:39.155 [2024-07-11 23:45:59.845394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:39.155 [2024-07-11 23:45:59.845533] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:39.155 [2024-07-11 23:45:59.845555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:39.155 [2024-07-11 23:45:59.845573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:39.155 [2024-07-11 23:45:59.847803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:39.155 Malloc0
00:32:39.155 23:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:39.155 23:45:59 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:39.155 23:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:39.155 23:45:59 -- common/autotest_common.sh@10 -- # set +x
00:32:39.155 [2024-07-11 23:45:59.856701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:39.155 [2024-07-11 23:45:59.857218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:39.155 [2024-07-11 23:45:59.857468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:39.155 [2024-07-11 23:45:59.857509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:39.155 [2024-07-11 23:45:59.857526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:39.155 [2024-07-11 23:45:59.857669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:39.155 [2024-07-11 23:45:59.857769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:39.155 [2024-07-11 23:45:59.857789] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:39.155 [2024-07-11 23:45:59.857805] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:39.155 [2024-07-11 23:45:59.859936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:39.155 23:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:39.155 23:45:59 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:39.155 23:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:39.155 23:45:59 -- common/autotest_common.sh@10 -- # set +x
00:32:39.155 [2024-07-11 23:45:59.868820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:39.155 [2024-07-11 23:45:59.869254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:39.155 [2024-07-11 23:45:59.869415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:39.155 [2024-07-11 23:45:59.869455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2095860 with addr=10.0.0.2, port=4420
00:32:39.155 [2024-07-11 23:45:59.869471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095860 is same with the state(5) to be set
00:32:39.155 [2024-07-11 23:45:59.869598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095860 (9): Bad file descriptor
00:32:39.155 [2024-07-11 23:45:59.869749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:39.155 [2024-07-11 23:45:59.869771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:39.155 [2024-07-11 23:45:59.869785] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:39.155 23:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:39.155 23:45:59 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:39.155 23:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:39.155 23:45:59 -- common/autotest_common.sh@10 -- # set +x
00:32:39.155 [2024-07-11 23:45:59.871993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:39.155 [2024-07-11 23:45:59.873919] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:39.155 23:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:39.155 23:45:59 -- host/bdevperf.sh@38 -- # wait 387291
00:32:39.155 [2024-07-11 23:45:59.881047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:39.155 [2024-07-11 23:45:59.909641] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
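[Note] With the reconnect noise stripped away, the target-side setup that host/bdevperf.sh traced piecemeal above (@17 through @21) is five RPCs, collected here for readability. The commands are verbatim from the log; only the comments are added, and the flag annotations are a hedged reading of the trace rather than documentation:

    # Target setup, consolidated from the rpc_cmd calls traced above.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport; -u appears to set the IO unit size
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host; -s: serial number
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # start the listener the host has been retrying

Only after the last call does the log flip from "Resetting controller failed" to "Resetting controller successful": the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice means the listener finally exists, so the next reconnect attempt completes.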
00:32:47.266 
00:32:47.266                                                 Latency(us)
00:32:47.266 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:32:47.266 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:47.266 Verification LBA range: start 0x0 length 0x4000
00:32:47.266 Nvme1n1                     :      15.01    9442.53      36.88   17124.01       0.00    4803.75     731.21   20388.98
00:32:47.266 ===================================================================================================================
00:32:47.266 Total                       :               9442.53      36.88   17124.01       0.00    4803.75     731.21   20388.98
00:32:47.266 23:46:08 -- host/bdevperf.sh@39 -- # sync
00:32:47.266 23:46:08 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:47.266 23:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:47.266 23:46:08 -- common/autotest_common.sh@10 -- # set +x
00:32:47.266 23:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:47.266 23:46:08 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:32:47.266 23:46:08 -- host/bdevperf.sh@44 -- # nvmftestfini
00:32:47.266 23:46:08 -- nvmf/common.sh@476 -- # nvmfcleanup
00:32:47.266 23:46:08 -- nvmf/common.sh@116 -- # sync
00:32:47.266 23:46:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:32:47.266 23:46:08 -- nvmf/common.sh@119 -- # set +e
00:32:47.266 23:46:08 -- nvmf/common.sh@120 -- # for i in {1..20}
00:32:47.266 23:46:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:32:47.266 rmmod nvme_tcp
00:32:47.266 rmmod nvme_fabrics
00:32:47.266 rmmod nvme_keyring
00:32:47.266 23:46:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:32:47.266 23:46:08 -- nvmf/common.sh@123 -- # set -e
00:32:47.266 23:46:08 -- nvmf/common.sh@124 -- # return 0
00:32:47.266 23:46:08 -- nvmf/common.sh@477 -- # '[' -n 387980 ']'
00:32:47.266 23:46:08 -- nvmf/common.sh@478 -- # killprocess 387980
00:32:47.266 23:46:08 -- common/autotest_common.sh@926 -- # '[' -z 387980 ']'
00:32:47.266 23:46:08 -- common/autotest_common.sh@930 -- # kill -0 387980
00:32:47.266 23:46:08 -- common/autotest_common.sh@931 -- # uname
00:32:47.266 23:46:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:47.266 23:46:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 387980
00:32:47.266 23:46:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:47.266 23:46:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:47.266 23:46:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 387980'
00:32:47.266 killing process with pid 387980
00:32:47.266 23:46:08 -- common/autotest_common.sh@945 -- # kill 387980
00:32:47.266 23:46:08 -- common/autotest_common.sh@950 -- # wait 387980
00:32:47.524 23:46:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:32:47.524 23:46:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:32:47.524 23:46:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:32:47.524 23:46:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:47.524 23:46:08 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:32:47.524 23:46:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:47.524 23:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:47.524 23:46:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:50.055 23:46:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:32:50.055 
00:32:50.055 real 0m23.208s
00:32:50.055 user 1m0.478s
00:32:50.055 sys 0m5.187s
00:32:50.055 23:46:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:50.055 23:46:10 -- common/autotest_common.sh@10 -- # set +x
00:32:50.055 ************************************
00:32:50.055 END TEST nvmf_bdevperf
00:32:50.055 ************************************
00:32:50.055 23:46:10 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:32:50.055 23:46:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:32:50.055 23:46:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:50.055 23:46:10 -- common/autotest_common.sh@10 -- # set +x
00:32:50.055 ************************************
00:32:50.055 START TEST nvmf_target_disconnect
00:32:50.055 ************************************
00:32:50.055 23:46:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:32:50.055 * Looking for test storage...
00:32:50.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:50.055 23:46:10 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:50.055 23:46:10 -- nvmf/common.sh@7 -- # uname -s
00:32:50.055 23:46:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:50.055 23:46:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:50.055 23:46:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:50.055 23:46:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:50.055 23:46:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:50.055 23:46:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:50.055 23:46:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:50.055 23:46:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:50.055 23:46:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:50.055 23:46:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:50.055 23:46:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:32:50.056 23:46:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:32:50.056 23:46:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:50.056 23:46:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:50.056 23:46:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:50.056 23:46:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:50.056 23:46:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:50.056 23:46:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:50.056 23:46:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:50.056 23:46:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:50.056 23:46:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.056 23:46:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.056 23:46:10 -- paths/export.sh@5 -- # export PATH 00:32:50.056 23:46:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.056 23:46:10 -- nvmf/common.sh@46 -- # : 0 00:32:50.056 23:46:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:50.056 23:46:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:50.056 23:46:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:50.056 23:46:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.056 23:46:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.056 23:46:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:50.056 23:46:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:50.056 23:46:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:50.056 23:46:10 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:50.056 23:46:10 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:50.056 23:46:10 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:50.056 23:46:10 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:50.056 23:46:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:50.056 23:46:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.056 23:46:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:50.056 23:46:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:50.056 23:46:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:50.056 23:46:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.056 23:46:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:50.056 23:46:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.056 23:46:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:50.056 23:46:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:50.056 23:46:10 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:32:50.056 23:46:10 -- common/autotest_common.sh@10 -- # set +x 00:32:52.629 23:46:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:52.629 23:46:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:52.629 23:46:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:52.629 23:46:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:52.629 23:46:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:52.629 23:46:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:52.629 23:46:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:52.629 23:46:13 -- nvmf/common.sh@294 -- # net_devs=() 00:32:52.629 23:46:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:52.629 23:46:13 -- nvmf/common.sh@295 -- # e810=() 00:32:52.629 23:46:13 -- nvmf/common.sh@295 -- # local -ga e810 00:32:52.629 23:46:13 -- nvmf/common.sh@296 -- # x722=() 00:32:52.630 23:46:13 -- nvmf/common.sh@296 -- # local -ga x722 00:32:52.630 23:46:13 -- nvmf/common.sh@297 -- # mlx=() 00:32:52.630 23:46:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:52.630 23:46:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.630 23:46:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:52.630 23:46:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:52.630 23:46:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:52.630 23:46:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:52.630 23:46:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:52.630 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:52.630 23:46:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:52.630 23:46:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:52.630 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:52.630 23:46:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:52.630 23:46:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:52.630 23:46:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.630 23:46:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:52.630 23:46:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.630 23:46:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:52.630 Found net devices under 0000:84:00.0: cvl_0_0 00:32:52.630 23:46:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.630 23:46:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:52.630 23:46:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.630 23:46:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:52.630 23:46:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.630 23:46:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:52.630 Found net devices under 0000:84:00.1: cvl_0_1 00:32:52.630 23:46:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.630 23:46:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:52.630 23:46:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:52.630 23:46:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:52.630 23:46:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.630 23:46:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.630 23:46:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.630 23:46:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:52.630 23:46:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.630 23:46:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.630 23:46:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:52.630 23:46:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.630 23:46:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.630 23:46:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:52.630 23:46:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:52.630 23:46:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.630 23:46:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.630 23:46:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.630 23:46:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.630 23:46:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:52.630 23:46:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.630 23:46:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.630 23:46:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.630 23:46:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:52.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:52.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:32:52.630 00:32:52.630 --- 10.0.0.2 ping statistics --- 00:32:52.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.630 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:32:52.630 23:46:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:32:52.630 00:32:52.630 --- 10.0.0.1 ping statistics --- 00:32:52.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.630 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:32:52.630 23:46:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.630 23:46:13 -- nvmf/common.sh@410 -- # return 0 00:32:52.630 23:46:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:52.630 23:46:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.630 23:46:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:52.630 23:46:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.630 23:46:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:52.630 23:46:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:52.630 23:46:13 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:52.630 23:46:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:52.630 23:46:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:52.630 23:46:13 -- common/autotest_common.sh@10 -- # set +x 00:32:52.630 ************************************ 00:32:52.630 START TEST nvmf_target_disconnect_tc1 00:32:52.630 ************************************ 00:32:52.630 23:46:13 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:32:52.630 23:46:13 -- host/target_disconnect.sh@32 -- # set +e 00:32:52.630 23:46:13 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:52.630 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.630 [2024-07-11 23:46:13.427970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.630 [2024-07-11 23:46:13.428370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.630 [2024-07-11 23:46:13.428420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b8e00 with addr=10.0.0.2, port=4420 00:32:52.630 [2024-07-11 23:46:13.428462] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:52.630 [2024-07-11 23:46:13.428492] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:52.630 [2024-07-11 23:46:13.428507] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:52.630 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:52.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:52.630 Initializing NVMe Controllers 00:32:52.630 23:46:13 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:52.630 23:46:13 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:52.630 23:46:13 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:32:52.630 23:46:13 -- common/autotest_common.sh@1132 -- # return 0 00:32:52.630 
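The tc1 failure above is the intended outcome: nothing is listening on 10.0.0.2:4420 at this point (the earlier target was killed and tc2's nvmf_tgt has not started yet), so connect() returns errno 111 (ECONNREFUSED) and spdk_nvme_probe() reports the scan failure the script then asserts on. A hedged one-liner to confirm the port is closed (netcat flag support varies between builds):

  nc -z -w 2 10.0.0.2 4420 \
      && echo "listener present on 10.0.0.2:4420" \
      || echo "connection refused or timed out - the state tc1 expects"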
23:46:13 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:52.630 23:46:13 -- host/target_disconnect.sh@41 -- # set -e 00:32:52.630 00:32:52.630 real 0m0.146s 00:32:52.630 user 0m0.055s 00:32:52.630 sys 0m0.089s 00:32:52.630 23:46:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:52.630 23:46:13 -- common/autotest_common.sh@10 -- # set +x 00:32:52.630 ************************************ 00:32:52.630 END TEST nvmf_target_disconnect_tc1 00:32:52.630 ************************************ 00:32:52.630 23:46:13 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:52.630 23:46:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:52.630 23:46:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:52.630 23:46:13 -- common/autotest_common.sh@10 -- # set +x 00:32:52.630 ************************************ 00:32:52.630 START TEST nvmf_target_disconnect_tc2 00:32:52.630 ************************************ 00:32:52.630 23:46:13 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:32:52.630 23:46:13 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:52.630 23:46:13 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:52.630 23:46:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:52.630 23:46:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:52.630 23:46:13 -- common/autotest_common.sh@10 -- # set +x 00:32:52.630 23:46:13 -- nvmf/common.sh@469 -- # nvmfpid=391192 00:32:52.630 23:46:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:52.630 23:46:13 -- nvmf/common.sh@470 -- # waitforlisten 391192 00:32:52.630 23:46:13 -- common/autotest_common.sh@819 -- # '[' -z 391192 ']' 00:32:52.630 23:46:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.630 23:46:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:52.630 23:46:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.630 23:46:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:52.630 23:46:13 -- common/autotest_common.sh@10 -- # set +x 00:32:52.630 [2024-07-11 23:46:13.532837] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:52.631 [2024-07-11 23:46:13.532928] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.631 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.890 [2024-07-11 23:46:13.640420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:52.890 [2024-07-11 23:46:13.793679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:52.890 [2024-07-11 23:46:13.793854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.890 [2024-07-11 23:46:13.793873] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.890 [2024-07-11 23:46:13.793887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:52.890 [2024-07-11 23:46:13.793977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:52.890 [2024-07-11 23:46:13.794033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:52.890 [2024-07-11 23:46:13.794086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:52.890 [2024-07-11 23:46:13.794089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:53.826 23:46:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:53.826 23:46:14 -- common/autotest_common.sh@852 -- # return 0 00:32:53.826 23:46:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:53.826 23:46:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:53.826 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:32:53.826 23:46:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.826 23:46:14 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:53.826 23:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:53.826 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:32:53.826 Malloc0 00:32:53.826 23:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:53.826 23:46:14 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:53.826 23:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:53.826 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:32:53.826 [2024-07-11 23:46:14.625978] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.826 23:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:53.826 23:46:14 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:53.826 23:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:53.826 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:32:53.826 23:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:53.826 23:46:14 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:53.826 23:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:53.826 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:32:53.826 23:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:53.826 23:46:14 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:53.826 23:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:53.826 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:32:53.826 [2024-07-11 23:46:14.658477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.826 23:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:53.826 23:46:14 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:53.826 23:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:53.826 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:32:53.826 23:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:53.826 23:46:14 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:53.826 23:46:14 -- host/target_disconnect.sh@50 -- # reconnectpid=391350 00:32:53.826 23:46:14 -- 
host/target_disconnect.sh@52 -- # sleep 2
EAL: No free 2048 kB hugepages reported on node 1
00:32:55.995 23:46:16 -- host/target_disconnect.sh@53 -- # kill -9 391192
23:46:16 -- host/target_disconnect.sh@55 -- # sleep 2
00:32:55.995 (all queued Read/Write I/Os on the qpair -- queue depth 32 -- completed with error (sct=0, sc=8), each logged as "starting I/O failed")
00:32:55.995 [2024-07-11 23:46:16.685012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:32:55.995 (same burst of failed completions)
00:32:55.995 [2024-07-11 23:46:16.685440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:55.996 (same burst of failed completions)
00:32:55.996 [2024-07-11 23:46:16.685772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:55.996 (same burst of failed completions)
00:32:55.996 [2024-07-11 23:46:16.686218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:55.996 [2024-07-11 23:46:16.686456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.996 [2024-07-11 23:46:16.686728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.996 [2024-07-11 23:46:16.686796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420
00:32:55.996 qpair failed and we were unable to recover it.
00:32:55.996 (three more reconnect attempts fail identically, timestamps 23:46:16.687115 through 23:46:16.688399)
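A note on the repeated status pair in the burst above: sct=0 selects the NVMe generic command status set, and sc=8 in that set is "Command Aborted due to SQ Deletion", i.e. the host driver itself completed every outstanding I/O after the qpair's socket died; no media error ever came back from the (now dead) target. A throwaway decode helper, a sketch of the spec's generic-status table rather than any SPDK API:

  decode_nvme_status() {   # usage: decode_nvme_status <sct> <sc>
      case "$1/$2" in
          0/0) echo "generic: successful completion" ;;
          0/8) echo "generic: command aborted due to SQ deletion" ;;
          *)   echo "see the NVMe base spec status code tables" ;;
      esac
  }
  decode_nvme_status 0 8   # -> generic: command aborted due to SQ deletion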
00:32:55.996 (the same three-line failure repeats roughly thirty more times as the host keeps retrying: connect() failed, errno = 111, twice per attempt, then sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."; timestamps run from 23:46:16.688648 to 23:46:16.717689)
00:32:55.997 [2024-07-11 23:46:16.717979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.718185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.718238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-07-11 23:46:16.718420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.718642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.718705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-07-11 23:46:16.718959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.719157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.719223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-07-11 23:46:16.719436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.719682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.719746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-07-11 23:46:16.720022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.720235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-07-11 23:46:16.720265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-07-11 23:46:16.720497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.720693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.720756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.721042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.721232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.721261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-07-11 23:46:16.721445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.721615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.721679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.721961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.722176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.722231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.722443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.722633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.722697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.722981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.723217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.723247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.723434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.723658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.723722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.724015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.724230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.724259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.724474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.724677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.724741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-07-11 23:46:16.724990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.725172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.725229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.725383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.725606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.725670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.725957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.726178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.726235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.726443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.726627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.726706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.726963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.727201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.727230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.727447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.727672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.727735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.728020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.728225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.728291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-07-11 23:46:16.728552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.728787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.728851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.729130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.729350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.729414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.729696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.729876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.729938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.730205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.730405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.730468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.730756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.730963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.731026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.731295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.731469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.731520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.731802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.732013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.732076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-07-11 23:46:16.732313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.732528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.732592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.732870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.733052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.733114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.733400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.733632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.733696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.733983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.734188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.734238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.734455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.734651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.734714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.734974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.735213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.735242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.735462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.735690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.735754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-07-11 23:46:16.736032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.736236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.736265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.736463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.736672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.736736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.737019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.737187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.737238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.737443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.737667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.737730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-07-11 23:46:16.737957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.738194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-07-11 23:46:16.738243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.738458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.738677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.738741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.739019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.739234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.739263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 
00:32:55.999 [2024-07-11 23:46:16.739481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.739709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.739773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.740052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.740239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.740268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.740452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.740617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.740681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.740958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.741194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.741246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.741453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.741647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.741710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.741988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.742195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.742224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.742452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.742655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.742718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 
00:32:55.999 [2024-07-11 23:46:16.742998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.743239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.743304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.743587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.743811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.743875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.744134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.744392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.744456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.744714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.744910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.744974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.745229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.745393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.745456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.745741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.745975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.746038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.746333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.746562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.746626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 
00:32:55.999 [2024-07-11 23:46:16.746903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.747079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.747159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.747433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.747695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.747758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.748040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.748244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.748272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.748455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.748674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.748738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.749051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.749333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.749407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.749660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.749936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.750000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.750282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.750478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.750542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 
00:32:55.999 [2024-07-11 23:46:16.750798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.750974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.751037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.751322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.751554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.751617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.751868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.752091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.752171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.752392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.752633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.752697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.752976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.753216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.753245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.753459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.753640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.753703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.753966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.754204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.754250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 
00:32:55.999 [2024-07-11 23:46:16.754464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.754678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.754751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.755011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.755237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.755265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.755458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.755700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.755764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.756041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.756223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.756252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.756460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.756690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.756753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.757038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.757255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.757284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.757500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.757727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.757790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 
00:32:55.999 [2024-07-11 23:46:16.758047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.758233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.758261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.758442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.758688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.758750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:55.999 [2024-07-11 23:46:16.758997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.759166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.999 [2024-07-11 23:46:16.759230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:55.999 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.759524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.759746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.759819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.760107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.760318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.760346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.760556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.760797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.760861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.761155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.761362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.761390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 
00:32:56.000 [2024-07-11 23:46:16.761645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.761856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.761920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.762210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.762408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.762471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.762748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.762936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.762999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.763293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.763515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.763579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.763854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.764052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.764115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.764412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.764634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.764697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.764972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.765199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.765253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 
00:32:56.000 [2024-07-11 23:46:16.765471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.765680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.765743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.765993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.766177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.766229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.766463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.766719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.766783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.767041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.767240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.767269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.767458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.767663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.767725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.767952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.768207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.768236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 00:32:56.000 [2024-07-11 23:46:16.768421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.768628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.768691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it. 
00:32:56.000 [2024-07-11 23:46:16.768963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.769213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.000 [2024-07-11 23:46:16.769242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.000 qpair failed and we were unable to recover it.
(the same error sequence, two posix_sock_create connect() failures with errno = 111 (ECONNREFUSED), one nvme_tcp_qpair_connect_sock error for tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats continuously from 23:46:16.768963 through 23:46:16.848320)
00:32:56.006 [2024-07-11 23:46:16.848116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.006 [2024-07-11 23:46:16.848286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.006 [2024-07-11 23:46:16.848320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-07-11 23:46:16.848526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.006 [2024-07-11 23:46:16.848806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.006 [2024-07-11 23:46:16.848869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.006 qpair failed and we were unable to recover it. 00:32:56.006 [2024-07-11 23:46:16.849107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.006 [2024-07-11 23:46:16.849366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.006 [2024-07-11 23:46:16.849394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.006 qpair failed and we were unable to recover it. 00:32:56.006 [2024-07-11 23:46:16.849578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.849811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.849873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.850166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.850377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.850430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.850711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.850971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.851034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.851310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.851534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.851596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.851880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.852051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.852113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 
00:32:56.007 [2024-07-11 23:46:16.852406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.852624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.852687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.852980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.853232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.853261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.853479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.853667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.853730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.854004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.854234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.854263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.854447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.854664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.854726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.854963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.855241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.855270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.855459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.855671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.855734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 
00:32:56.007 [2024-07-11 23:46:16.856013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.856214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.856279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.856523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.856718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.856780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.857032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.857322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.857351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.857571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.857805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.857867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.858163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.858342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.858371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.858666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.858859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.858923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.859231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.859461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.859525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 
00:32:56.007 [2024-07-11 23:46:16.859815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.860033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.860096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.860389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.860591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.860654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.860940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.861168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.861215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.861402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.861668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.007 [2024-07-11 23:46:16.861731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.007 qpair failed and we were unable to recover it. 00:32:56.007 [2024-07-11 23:46:16.861994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.862236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.862266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.862434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.862634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.862697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.862977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.863169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.863237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 
00:32:56.008 [2024-07-11 23:46:16.863448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.863671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.863735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.864017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.864199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.864246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.864416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.864618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.864681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.864972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.865201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.865230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.865460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.865740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.865803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.866097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.866336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.866365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.866569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.866764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.866827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 
00:32:56.008 [2024-07-11 23:46:16.867079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.867261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.867290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.867501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.867781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.867844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.868145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.868348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.868376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.868619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.868807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.868870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.869151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.869359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.869387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.869705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.869965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.870028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.870306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.870466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.870495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 
00:32:56.008 [2024-07-11 23:46:16.870794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.870985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.871048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.008 qpair failed and we were unable to recover it. 00:32:56.008 [2024-07-11 23:46:16.871336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.871564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.008 [2024-07-11 23:46:16.871626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.871881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.872170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.872232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.872384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.872625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.872690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.872946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.873160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.873227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.873415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.873609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.873672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.873950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.874233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.874263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 
00:32:56.009 [2024-07-11 23:46:16.874440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.874670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.874733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.875028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.875240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.875268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.875430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.875667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.875729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.876010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.876290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.876318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.876541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.876742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.876805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.877094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.877311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.877340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.877551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.877773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.877835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 
00:32:56.009 [2024-07-11 23:46:16.878116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.878412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.878477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.878762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.878954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.879017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.879273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.879493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.879554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.879853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.880091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.880169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.880384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.880597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.880660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.880946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.881178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.881226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.881438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.881630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.881693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 
00:32:56.009 [2024-07-11 23:46:16.881994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.882217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.882246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.882461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.882716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.882779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.883065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.883304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.883332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-07-11 23:46:16.883519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.883706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-07-11 23:46:16.883769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-07-11 23:46:16.884020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-07-11 23:46:16.884235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-07-11 23:46:16.884264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-07-11 23:46:16.884467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-07-11 23:46:16.884722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-07-11 23:46:16.884785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-07-11 23:46:16.885077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-07-11 23:46:16.885306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-07-11 23:46:16.885335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00c8000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
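On Linux, errno = 111 is ECONNREFUSED: each connect() issued by SPDK's posix socket layer (posix.c:posix_sock_create) is rejected because nothing is accepting connections on 10.0.0.2:4420, the standard NVMe/TCP port, so nvme_tcp_qpair_connect_sock cannot establish the qpair's socket. The sketch below is a minimal, self-contained reproduction of that failure mode, not SPDK code; the address and port are copied from the log, and it assumes a Linux host where 10.0.0.2 is reachable but no target is listening on the port.

/* Minimal reproduction of the failure above (illustrative only, not
 * SPDK code). Assumes 10.0.0.2 is reachable but no NVMe/TCP target
 * is listening on port 4420; connect() then fails with errno = 111. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);          /* NVMe/TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but the port closed, this prints
         * "connect() failed, errno = 111 (Connection refused)",
         * matching the posix_sock_create lines above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The sub-millisecond spacing of the timestamps is consistent with ECONNREFUSED: the failure is an immediate TCP reset from the peer, not a timeout, which is why the host can cycle through attempts this quickly.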
00:32:56.010 [2024-07-11 23:46:16.885551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.010 [2024-07-11 23:46:16.885780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.010 [2024-07-11 23:46:16.885811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.010 qpair failed and we were unable to recover it.
[... the same failure pattern, now against tqpair=0x1b5ff50, repeats from 23:46:16.886 through 23:46:16.910; duplicate entries elided ...]
00:32:56.012 [2024-07-11 23:46:16.910764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.012 [2024-07-11 23:46:16.910987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.012 [2024-07-11 23:46:16.911036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.012 qpair failed and we were unable to recover it.
00:32:56.012 [2024-07-11 23:46:16.911219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.911466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.911517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.911738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.911962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.912011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.912208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.912443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.912499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.912721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.912967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.913016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.913211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.913465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.913519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.913729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.913925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.913976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.914186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.914372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.914420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 
00:32:56.012 [2024-07-11 23:46:16.914632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.914846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.914895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.915111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.915302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.915330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.915473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.915665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.915717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.915911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.916153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.916181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.916358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.916550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.916601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.916824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.917025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.917052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.917230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.917432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.917496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 
00:32:56.012 [2024-07-11 23:46:16.917700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.917917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.917970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.012 qpair failed and we were unable to recover it. 00:32:56.012 [2024-07-11 23:46:16.918176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.918370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.012 [2024-07-11 23:46:16.918419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.918647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.918898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.918946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.919152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.920340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.920375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.920585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.920808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.920859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.921065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.921259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.921289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.921529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.921745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.921794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 
00:32:56.013 [2024-07-11 23:46:16.921999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.922210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.922238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.922418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.922661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.922710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.922932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.923103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.923130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.923319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.923545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.923596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.923775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.923988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.924042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.924271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.924531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.924581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.924792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.924959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.924985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 
00:32:56.013 [2024-07-11 23:46:16.925170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.925368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.925414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.925591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.925773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.925821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.926028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.926200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.926235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.926421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.926639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.926689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.926907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.927108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.927135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.927294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.927470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.927526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.927752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.927987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.928014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 
00:32:56.013 [2024-07-11 23:46:16.928196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.928412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.928468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.928695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.928908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.928958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.929170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.929365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.929414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.929606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.929853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.929903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.930083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.930252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.013 [2024-07-11 23:46:16.930280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.013 qpair failed and we were unable to recover it. 00:32:56.013 [2024-07-11 23:46:16.930481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.930680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.930730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-07-11 23:46:16.930938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.931124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.931157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 
00:32:56.014 [2024-07-11 23:46:16.931322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.931513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.931565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-07-11 23:46:16.931758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.932003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.932061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-07-11 23:46:16.932248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.932410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.932456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-07-11 23:46:16.932640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.932790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.932843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-07-11 23:46:16.932992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.933224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.933260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-07-11 23:46:16.933447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.933672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-07-11 23:46:16.933700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.284 [2024-07-11 23:46:16.933910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.284 [2024-07-11 23:46:16.934092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.284 [2024-07-11 23:46:16.934120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.284 qpair failed and we were unable to recover it. 
00:32:56.284 [2024-07-11 23:46:16.934349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.284 [2024-07-11 23:46:16.934582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.284 [2024-07-11 23:46:16.934632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.284 qpair failed and we were unable to recover it. 00:32:56.284 [2024-07-11 23:46:16.934837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.935068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.935095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.935318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.935500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.935527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.935712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.935903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.935930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.936145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.936355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.936383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.936578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.936759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.936786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.936946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.937124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.937165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 
00:32:56.285 [2024-07-11 23:46:16.937399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.937631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.937682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.937867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.938042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.938069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.938280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.938472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.938523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.938715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.938940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.938990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.939213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.939396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.939441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.939600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.939788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.939838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.940020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.940168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.940196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 
00:32:56.285 [2024-07-11 23:46:16.940377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.940598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.940648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.940841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.941012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.941039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.941246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.941439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.941495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.941694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.941908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.941935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.942114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.942321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.942368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.942570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.942822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.942868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.943078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.943288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.943316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 
00:32:56.285 [2024-07-11 23:46:16.943517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.943724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.943774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.943970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.944231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.944279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.944470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.944696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.944747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.944919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.945101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.945128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.945323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.945543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.945590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.945810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.946010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.946037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.946248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.946465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.946527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 
00:32:56.285 [2024-07-11 23:46:16.946716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.946917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.946966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.947167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.947377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.947405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.947607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.947827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.947877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.948079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.948243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.948271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.948488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.948687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.948736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.948941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.949165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.949193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.949407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.949625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.949674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 
00:32:56.285 [2024-07-11 23:46:16.949896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.950096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.950122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.950324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.950484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.950535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.950731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.950921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.950970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.951164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.951322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.951368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.951569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.951811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.951860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.952039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.952222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.952249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 00:32:56.285 [2024-07-11 23:46:16.952466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.952684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.952733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.285 qpair failed and we were unable to recover it. 
00:32:56.285 [2024-07-11 23:46:16.952923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-11 23:46:16.953120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.953153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.953337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.953514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.953565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.953708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.953927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.953977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.954187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.954368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.954416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.954606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.954783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.954832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.955039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.955230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.955258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.955410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.955601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.955652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 
00:32:56.286 [2024-07-11 23:46:16.955868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.956077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.956103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.956265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.956435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.956505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.956729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.956969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.957019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.957205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.957397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.957443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.957649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.957870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.957920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.958093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.958321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.958368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 00:32:56.286 [2024-07-11 23:46:16.958566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.958814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-11 23:46:16.958864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.286 qpair failed and we were unable to recover it. 
00:32:56.290 [2024-07-11 23:46:17.024715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.024965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.025012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.025172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.025348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.025393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.025620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.025840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.025889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.026096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.026274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.026320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.026518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.026728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.026775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.026956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.027158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.027186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.027376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.027619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.027668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 
00:32:56.290 [2024-07-11 23:46:17.027878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.028077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.028104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.028305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.028517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.028568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.028755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.028950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.029000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.029181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.029373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.029419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.029643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.029845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.029894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.030098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.030326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.030372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.030580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.030768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.030817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 
00:32:56.290 [2024-07-11 23:46:17.031036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.031229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.031280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.031516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.031726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.031776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.031964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.032118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.032153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.032348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.032540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.032588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.032795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.033025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.033052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.033257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.033461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.033510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.033745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.033964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.034013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 
00:32:56.290 [2024-07-11 23:46:17.034194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.034405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.034447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.034665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.034879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.034928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.035136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.035347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.035374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.035595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.035814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.035865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.036048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.036230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.036259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.036428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.036657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.036705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.036923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.037122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.037156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 
00:32:56.290 [2024-07-11 23:46:17.037374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.037572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.037620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.037800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.038003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.038030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.038213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.038360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.038405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.038610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.038857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.038905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.039087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.039291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.039338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.039552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.039776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.039825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.040009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.040216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.040263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 
00:32:56.290 [2024-07-11 23:46:17.040480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.040672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.040720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.040946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.041167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.041195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.041410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.041604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.041658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.041886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.042088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.042120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.290 qpair failed and we were unable to recover it. 00:32:56.290 [2024-07-11 23:46:17.042280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.290 [2024-07-11 23:46:17.042448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.042475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.042692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.042943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.042993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.043203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.043437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.043494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-07-11 23:46:17.043694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.043936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.043984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.044193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.044424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.044475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.044680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.044894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.044942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.045155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.045376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.045422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.045639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.045890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.045941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.046137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.046397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.046425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.046644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.046896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.046945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-07-11 23:46:17.047130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.047318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.047346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.047539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.047754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.047802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.048011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.048236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.048265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.048453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.048671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.048721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.048927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.049069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.049096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.049293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.049560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.049607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.049797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.050022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.050050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-07-11 23:46:17.050260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.050475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.050529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.050710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.050896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.050947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.051154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.051339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.051385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.051604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.051841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.051891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.052100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.052318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.052347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.052566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.052792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.052842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.053051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.053258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.053287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-07-11 23:46:17.053509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.053723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.053773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.053976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.054209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.054243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.054442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.054604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.054655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.054851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.055021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.055048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.055258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.055464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.055517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.055740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.055943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.055970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.056154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.056302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.056330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-07-11 23:46:17.056526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.056725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.056772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.056984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.057211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.057260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.057457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.057678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.057728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.057935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.058146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.058174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.058336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.058532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.058581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-07-11 23:46:17.058802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.059026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-07-11 23:46:17.059053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.059239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.059457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.059505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-07-11 23:46:17.059717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.059944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.059993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.060203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.060386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.060431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.060652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.060861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.060910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.061144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.061364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.061392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.061567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.061732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.061782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.061992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.062179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.062225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.062422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.062640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.062690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-07-11 23:46:17.062920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.063094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.063120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.063309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.063528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.063579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.063780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.064003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.064030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.064218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.064478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.064528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.064716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.064943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.064992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.065174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.065381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.065431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.065622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.065835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.065883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-07-11 23:46:17.066092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.066285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.066330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.066563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.066778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.066827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.067009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.067208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.067256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.067463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.067659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.067708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.067868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.068067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.068094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.068282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.068531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.068579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.068791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.068995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.069022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-07-11 23:46:17.069233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.069442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.069506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.069723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.069945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.070002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.070186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.070405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.070449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.070736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.070972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.071022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.071206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.071447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.071498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.071722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.071929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.071956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.072162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.072385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.072437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-07-11 23:46:17.072663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.072918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.072968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.073216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.073447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.073497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.073757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.074017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.074067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.074332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.074584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.074634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.074905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.075117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.075150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.075378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.075591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.075637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.075862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.076061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.076088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-07-11 23:46:17.076345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.076564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.076612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.076958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.077211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.077266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.077575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.077840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.077887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.078176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.078415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.078442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-07-11 23:46:17.078704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.079000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-07-11 23:46:17.079049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.079244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.079494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.079544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.079727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.080022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.080080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-07-11 23:46:17.080348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.080572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.080621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.080910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.081178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.081206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.081456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.081681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.081728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.081991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.082254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.082282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.082573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.082781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.082831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.083064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.083312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.083340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.083590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.083847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.083903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-07-11 23:46:17.084114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.084352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.084381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.084613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.084841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.084892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.085176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.085422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.085449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.085689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.085948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.086003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.086223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.086457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.086505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.086739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.087022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.087083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.087473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.087784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.087834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-07-11 23:46:17.088077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.088283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.088313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.088581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.088881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.088930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.089189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.089368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.089395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.089645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.089942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.089991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.090242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.090435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.090481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.090781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.091040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.091067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.091322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.091517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.091566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-07-11 23:46:17.091820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.092086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.092113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.092338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.092564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.092612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.092887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.093169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.093198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.093446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.093658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.093708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.093964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.094191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.094219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.094592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.094923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.094977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.095232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.095479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.095527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-07-11 23:46:17.095770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.095972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.096018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.096247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.096512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.096564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.096801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.097020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.097048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.097298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.097554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.097607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.097858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.098106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.098133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.098362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.098614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.098668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.098946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.099222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.099250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-07-11 23:46:17.099494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.099760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.099813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.100059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.100319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.100349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.100588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.100822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.100865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-07-11 23:46:17.101165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.101439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-07-11 23:46:17.101467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.101714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.101987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.102038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.102322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.102614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.102667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.102926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.103233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.103261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 
00:32:56.294 [2024-07-11 23:46:17.103522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.103742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.103793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.104019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.104226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.104256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.104474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.104763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.104812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.105019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.105274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.105303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.105576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.105821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.105871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.106152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.106361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.106388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.106640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.106942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.106993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 
00:32:56.294 [2024-07-11 23:46:17.107236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.107481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.107530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.107771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.108085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.108136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.108386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.108640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.108688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.108914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.109251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.109279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.109530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.109793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.109843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.110137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.110380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.110407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.110651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.110914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.110972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 
00:32:56.294 [2024-07-11 23:46:17.111233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.111404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.111431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.111691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.112006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.112055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.112512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.112830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.112878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.113077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.113331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.113360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.113591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.113879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.113928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.114214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.114471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.114498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-07-11 23:46:17.114785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.115048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-07-11 23:46:17.115098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 
00:32:56.295 [2024-07-11 23:46:17.115426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.115646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.115697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 00:32:56.295 [2024-07-11 23:46:17.116069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.116358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.116388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 00:32:56.295 [2024-07-11 23:46:17.116688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.116952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.117004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 00:32:56.295 [2024-07-11 23:46:17.117288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.117562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.117612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 00:32:56.295 [2024-07-11 23:46:17.117885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.118163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.118192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 00:32:56.295 [2024-07-11 23:46:17.118464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.118726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.118773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 00:32:56.295 [2024-07-11 23:46:17.119030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.119274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.119303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.295 qpair failed and we were unable to recover it. 
00:32:56.295 [2024-07-11 23:46:17.119543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.295 [2024-07-11 23:46:17.119786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.119833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.120096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.120306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.120335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.120583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.120813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.120863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.121119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.121367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.121395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.121632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.121895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.121944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.122156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.122359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.122386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.122676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.122982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.123030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 
00:32:56.296 [2024-07-11 23:46:17.123316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.123542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.123592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.123873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.124053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.124080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.124284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.124548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.124602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.124817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.125089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.125117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.125372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.125685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.125733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.125948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.126154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.126187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.126419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.126689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.126739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 
00:32:56.296 [2024-07-11 23:46:17.127033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.127302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.127331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.127556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.127870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.127922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.128206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.128450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.128477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.128742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.129017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.129066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.129325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.129547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.129596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.129813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.130073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.130100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.130368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.130598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.130648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 
00:32:56.296 [2024-07-11 23:46:17.130904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.131193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.131222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.131509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.131811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.131861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.132094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.132321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.132351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.132547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.132766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.132817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.133073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.133383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.133412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.133643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.133887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.133937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.134182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.134434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.134460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 
00:32:56.296 [2024-07-11 23:46:17.134700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.134916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.134963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.135194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.135454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.135481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.135694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.135957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.136005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.136236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.136459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.136508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.136711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.136972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.137024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.137314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.137631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.137678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 00:32:56.296 [2024-07-11 23:46:17.137933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.138187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.296 [2024-07-11 23:46:17.138215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.296 qpair failed and we were unable to recover it. 
00:32:56.296 [2024-07-11 23:46:17.138482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-07-11 23:46:17.138774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-07-11 23:46:17.138821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.296 qpair failed and we were unable to recover it.
00:32:56.296 [2024-07-11 23:46:17.139053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-07-11 23:46:17.139233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-07-11 23:46:17.139261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.296 qpair failed and we were unable to recover it.
[... the same sequence (two posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420", ending in "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 23:46:17.139 through 23:46:17.218 ...]
00:32:56.300 [2024-07-11 23:46:17.219055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.219240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.219268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-07-11 23:46:17.219460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.219668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.219719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-07-11 23:46:17.219977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.220229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.220276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-07-11 23:46:17.220486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.220649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.220676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-07-11 23:46:17.220919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.221136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.221173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-07-11 23:46:17.221332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.221569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.221612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-07-11 23:46:17.221846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.222032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-07-11 23:46:17.222059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 
00:32:56.564 [2024-07-11 23:46:17.222230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.222386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.222434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.222630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.222845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.222897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.223127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.223334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.223362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.223574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.223776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.223824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.224022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.224190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.224218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.224447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.224602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.224629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.224842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.225066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.225093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 
00:32:56.564 [2024-07-11 23:46:17.225337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.225591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.225642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.225802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.226003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.226029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.226275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.226502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.226551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.226788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.227006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.564 [2024-07-11 23:46:17.227034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.564 qpair failed and we were unable to recover it. 00:32:56.564 [2024-07-11 23:46:17.227218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.227486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.227535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.227750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.228001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.228050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.228258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.228534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.228584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 
00:32:56.565 [2024-07-11 23:46:17.228757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.228996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.229022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.229205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.229463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.229512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.229754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.229992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.230041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.230277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.230511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.230562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.230745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.230996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.231023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.231213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.231439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.231496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.231700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.231919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.231968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 
00:32:56.565 [2024-07-11 23:46:17.232174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.232375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.232421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.232602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.232812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.232865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.233060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.233265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.233311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.233497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.233713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.233762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.233987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.234206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.234256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.234432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.234657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.234706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.234928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.235161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.235193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 
00:32:56.565 [2024-07-11 23:46:17.235368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.235574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.235625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.235877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.236128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.236167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.236389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.236589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.236645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.236845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.237034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.237060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.237228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.237412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.237498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.237675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.237919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.237970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.238175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.238374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.238418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 
00:32:56.565 [2024-07-11 23:46:17.238658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.238918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.238967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.239154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.239321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.239348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.239544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.239759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.239807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.240035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.240259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.240289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.240472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.240708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.240759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.240985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.241208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.565 [2024-07-11 23:46:17.241236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.565 qpair failed and we were unable to recover it. 00:32:56.565 [2024-07-11 23:46:17.242398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.242622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.242672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 
00:32:56.566 [2024-07-11 23:46:17.242863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.243081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.243109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.243329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.243602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.243655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.243830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.244039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.244066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.244258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.244506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.244556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.244767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.244957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.245007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.245227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.245494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.245542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.245791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.245982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.246009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 
00:32:56.566 [2024-07-11 23:46:17.246211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.246418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.246461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.246661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.246927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.246978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.247229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.247484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.247538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.247719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.247951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.248000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.248253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.248521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.248570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.248807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.249035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.249063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.249294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.249566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.249616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 
00:32:56.566 [2024-07-11 23:46:17.249816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.250009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.250036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.250283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.250515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.250564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.250843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.251085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.251114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.251319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.251555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.251605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.251800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.251981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.252007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.252258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.252519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.252569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.252760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.252997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.253024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 
00:32:56.566 [2024-07-11 23:46:17.253256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.253511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.253560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.253802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.254078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.254104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.254339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.254612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.254660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.255008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.255265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.255293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.255526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.255801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.255855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.256025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.256247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.256277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.256473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.256704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.256754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 
00:32:56.566 [2024-07-11 23:46:17.257000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.257263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.257310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.566 qpair failed and we were unable to recover it. 00:32:56.566 [2024-07-11 23:46:17.257519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.566 [2024-07-11 23:46:17.257800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.257860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.258046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.258213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.258259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.258476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.258708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.258756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.258982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.259231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.259280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.259548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.259836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.259884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.260121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.260347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.260376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 
00:32:56.567 [2024-07-11 23:46:17.260625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.260937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.260995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.261264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.261504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.261551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.261878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.262159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.262187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.262352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.262521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.262580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.262784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.263005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.263056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.263290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.263464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.263513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.263744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.264016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.264043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 
00:32:56.567 [2024-07-11 23:46:17.264245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.264445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.264500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.264740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.264999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.265048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.265248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.265480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.265531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.265797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.266182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.266213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.266408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.266655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.266703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.266950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.267188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.267216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 00:32:56.567 [2024-07-11 23:46:17.267471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.267703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.567 [2024-07-11 23:46:17.267753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.567 qpair failed and we were unable to recover it. 
00:32:56.567 [2024-07-11 23:46:17.268067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.567 [2024-07-11 23:46:17.268325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.567 [2024-07-11 23:46:17.268360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.567 qpair failed and we were unable to recover it.
00:32:56.567 [... the same four-line failure sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats ~150 more times between 23:46:17.268 and 23:46:17.344, with the log prefix advancing from 00:32:56.567 to 00:32:56.572 ...]
00:32:56.572 [2024-07-11 23:46:17.344741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.344969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.344996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.572 qpair failed and we were unable to recover it. 00:32:56.572 [2024-07-11 23:46:17.345203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.345417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.345475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.572 qpair failed and we were unable to recover it. 00:32:56.572 [2024-07-11 23:46:17.345663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.345892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.345940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.572 qpair failed and we were unable to recover it. 00:32:56.572 [2024-07-11 23:46:17.346163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.346325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.346372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.572 qpair failed and we were unable to recover it. 00:32:56.572 [2024-07-11 23:46:17.346569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.572 [2024-07-11 23:46:17.346844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.346893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.347117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.347319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.347367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.347627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.347830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.347881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 
00:32:56.573 [2024-07-11 23:46:17.348067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.348261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.348288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.348463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.348712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.348765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.348969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.349224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.349260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.349536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.349788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.349838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.350028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.350204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.350254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.350458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.350727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.350778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.351000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.351203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.351251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 
00:32:56.573 [2024-07-11 23:46:17.351404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.351672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.351731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.351963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.352147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.352174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.352326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.352572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.352625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.352852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.353081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.353109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.353323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.353538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.353587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.353753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.353977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.354031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.354215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.354389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.354432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 
00:32:56.573 [2024-07-11 23:46:17.354630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.354879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.354928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.355083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.355266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.355300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.355554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.355776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.355824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.356007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.356210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.356267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.356444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.356680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.356728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.356991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.357203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.357263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.357473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.357690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.357740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 
00:32:56.573 [2024-07-11 23:46:17.357942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.358118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.358152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.358333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.358585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.358636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.573 [2024-07-11 23:46:17.358864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.359080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.573 [2024-07-11 23:46:17.359107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.573 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.359319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.359549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.359596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.359829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.360019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.360045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.360235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.360416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.360475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.360692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.360884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.360932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 
00:32:56.574 [2024-07-11 23:46:17.361134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.361350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.361378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.361640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.361879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.361921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.362112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.362270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.362298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.362488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.362768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.362833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.363034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.363280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.363308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.363513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.363763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.363819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.364019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.364255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.364284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 
00:32:56.574 [2024-07-11 23:46:17.364513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.364778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.364826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.365173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.365389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.365417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.365741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.366060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.366109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.366345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.366635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.366688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.367013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.367307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.367334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.367674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.368058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.368106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.368310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.368553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.368599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 
00:32:56.574 [2024-07-11 23:46:17.368869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.369075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.369102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.369327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.369544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.369596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.369794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.370064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.370112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.370349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.370512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.370561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.370814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.371018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.371045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.371274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.371479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.371531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.371743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.371966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.372015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 
00:32:56.574 [2024-07-11 23:46:17.372215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.372442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.372494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.372729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.372996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.373048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.373284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.373532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.373584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.373828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.374027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.374053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.374267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.374451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.374514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.574 [2024-07-11 23:46:17.374797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.375119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.574 [2024-07-11 23:46:17.375156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.574 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.375332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.375585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.375635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 
00:32:56.575 [2024-07-11 23:46:17.375911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.376100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.376127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.376327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.376546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.376594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.376843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.377063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.377090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.377251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.377435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.377489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.377730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.377989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.378044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.378224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.378380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.378407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.378603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.378831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.378878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 
00:32:56.575 [2024-07-11 23:46:17.379086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.379273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.379301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.379545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.379762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.379813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.380025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.380228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.380275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.380491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.380751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.380801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.381148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.381347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.381375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.381619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.381835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.381885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.382108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.382332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.382360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 
00:32:56.575 [2024-07-11 23:46:17.382576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.382840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.382887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.383115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.383299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.383327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.383619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.383856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.383905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.384113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.384347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.384375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.384618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.384903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.384952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.385201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.385420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.385447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.385682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.385897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.385945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 
00:32:56.575 [2024-07-11 23:46:17.386225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.386434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.386461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.386726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.387090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.387137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.387340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.387605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.387653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.387853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.388040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.388067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.388279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.388464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.388522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.388722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.388956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.389004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.389264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.389615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.389665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 
00:32:56.575 [2024-07-11 23:46:17.389935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.390232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.390260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.575 qpair failed and we were unable to recover it. 00:32:56.575 [2024-07-11 23:46:17.390493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.575 [2024-07-11 23:46:17.390729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.390776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.576 qpair failed and we were unable to recover it. 00:32:56.576 [2024-07-11 23:46:17.390996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.391233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.391260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.576 qpair failed and we were unable to recover it. 00:32:56.576 [2024-07-11 23:46:17.391488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.391748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.391796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.576 qpair failed and we were unable to recover it. 00:32:56.576 [2024-07-11 23:46:17.392046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.392237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.392265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.576 qpair failed and we were unable to recover it. 00:32:56.576 [2024-07-11 23:46:17.392454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.392709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.392757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.576 qpair failed and we were unable to recover it. 00:32:56.576 [2024-07-11 23:46:17.393045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.393236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.576 [2024-07-11 23:46:17.393265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.576 qpair failed and we were unable to recover it. 
00:32:56.576 [2024-07-11 23:46:17.393447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.576 [2024-07-11 23:46:17.393689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.576 [2024-07-11 23:46:17.393736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.576 qpair failed and we were unable to recover it.
[... 152 further occurrences of the same four-line failure group elided, timestamps 2024-07-11 23:46:17.393996 through 23:46:17.471410; each is two posix_sock_create connect() failures with errno = 111 (ECONNREFUSED), an nvme_tcp_qpair_connect_sock error for tqpair=0x1b5ff50 against 10.0.0.2:4420, and "qpair failed and we were unable to recover it." ...]
00:32:56.581 [2024-07-11 23:46:17.471647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.581 [2024-07-11 23:46:17.471851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.581 [2024-07-11 23:46:17.471901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.581 qpair failed and we were unable to recover it.
00:32:56.581 [2024-07-11 23:46:17.472128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.472365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.472392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.472645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.472931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.472983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.473269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.473509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.473553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.473807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.474046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.474099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.474484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.474803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.474852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.475092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.475308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.475337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.475643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.475904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.475953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 
00:32:56.581 [2024-07-11 23:46:17.476185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.476357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.476385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.476598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.476828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.476875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.477064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.477264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.477294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.477549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.477787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.477835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.478043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.478266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.478294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.478506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.478763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.478814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.479102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.479348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.479376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 
00:32:56.581 [2024-07-11 23:46:17.479601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.479848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.479898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.480112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.480316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.480352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.480555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.480811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.581 [2024-07-11 23:46:17.480861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.581 qpair failed and we were unable to recover it. 00:32:56.581 [2024-07-11 23:46:17.481113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.481370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.481399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.481710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.481972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.482020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.482226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.482449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.482499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.482719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.483048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.483095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 
00:32:56.582 [2024-07-11 23:46:17.483311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.483533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.483580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.483843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.484088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.484120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.484491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.484776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.484826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.485121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.485419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.485448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.485705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.485951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.486000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.486257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.486517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.486568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.486778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.487034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.487083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 
00:32:56.582 [2024-07-11 23:46:17.487319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.487538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.487586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.487800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.488025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.488052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.488287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.488537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.488596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.488851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.489055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.489082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.489248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.489464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.489515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.489764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.490009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.490057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.490325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.490607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.490652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 
00:32:56.582 [2024-07-11 23:46:17.490907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.491135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.491170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.491331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.491565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.491617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.491863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.492104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.492132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.492520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.492802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.492853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.493155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.493443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.493473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.493727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.493970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.494015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.494220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.494415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.494465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 
00:32:56.582 [2024-07-11 23:46:17.494717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.494997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.495047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.495392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.495744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.495794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.496119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.496375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.496403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.496685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.496955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.497004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.582 qpair failed and we were unable to recover it. 00:32:56.582 [2024-07-11 23:46:17.497220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.582 [2024-07-11 23:46:17.497415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.497463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.497670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.497897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.497946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.498121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.498336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.498364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 
00:32:56.583 [2024-07-11 23:46:17.498573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.498834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.498883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.499131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.499412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.499440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.499716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.499963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.500013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.500249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.500502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.500530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.500777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.501008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.501058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.501375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.501597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.501648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.501842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.502009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.502037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 
00:32:56.583 [2024-07-11 23:46:17.502293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.502580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.502629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.502887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.503133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.503167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.503429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.503685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.503735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.503986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.504222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.504252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.504495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.504800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.504851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.505130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.505397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.505424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.505652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.505904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.505955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 
00:32:56.583 [2024-07-11 23:46:17.506208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.506431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.506469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.506776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.507045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.507096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.507365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.507611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.507660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.507952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.508203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.583 [2024-07-11 23:46:17.508232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.583 qpair failed and we were unable to recover it. 00:32:56.583 [2024-07-11 23:46:17.508440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.508707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.508760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.852 qpair failed and we were unable to recover it. 00:32:56.852 [2024-07-11 23:46:17.509030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.509298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.509327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.852 qpair failed and we were unable to recover it. 00:32:56.852 [2024-07-11 23:46:17.509544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.509756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.509804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.852 qpair failed and we were unable to recover it. 
00:32:56.852 [2024-07-11 23:46:17.510064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.510324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.510352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.852 qpair failed and we were unable to recover it. 00:32:56.852 [2024-07-11 23:46:17.510586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.510789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.510817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.852 qpair failed and we were unable to recover it. 00:32:56.852 [2024-07-11 23:46:17.511056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.511262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.511290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.852 qpair failed and we were unable to recover it. 00:32:56.852 [2024-07-11 23:46:17.511653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.512034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.512086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.852 qpair failed and we were unable to recover it. 00:32:56.852 [2024-07-11 23:46:17.512357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.512601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.852 [2024-07-11 23:46:17.512648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.512863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.513046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.513073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.513355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.513658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.513708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 
00:32:56.853 [2024-07-11 23:46:17.513913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.514183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.514222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.514414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.514650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.514700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.514950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.515191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.515219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.515434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.515661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.515709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.516002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.516282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.516311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.516571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.516817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.516866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.517111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.517330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.517358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 
00:32:56.853 [2024-07-11 23:46:17.517641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.517936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.517987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.518269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.518500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.518548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.518792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.519039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.519066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.519287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.519536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.519583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.519831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.520037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.520065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.520277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.520529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.520582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.520869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.521074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.521101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 
00:32:56.853 [2024-07-11 23:46:17.521345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.521644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.521697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.521941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.522247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.522275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.522523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.522804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.522858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.523135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.523355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.523382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.523618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.523894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.523945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.524195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.524387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.524414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.524664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.524949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.524997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 
00:32:56.853 [2024-07-11 23:46:17.525243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.525418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.525462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.525724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.525984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.526033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.526311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.526548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.526596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.526793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.527006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.527034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.527247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.527523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.527570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.527785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.528019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.528070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.853 qpair failed and we were unable to recover it. 00:32:56.853 [2024-07-11 23:46:17.528270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.528488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.853 [2024-07-11 23:46:17.528536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [... the identical failure sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from [2024-07-11 23:46:17.528697] through [2024-07-11 23:46:17.608012] ...]
00:32:56.859 [2024-07-11 23:46:17.608269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.608510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.608557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.608842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.609130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.609167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.609403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.609609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.609658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.609892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.610133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.610177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.610556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.610881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.610930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.611194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.611404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.611432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.611716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.612021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.612049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 
00:32:56.859 [2024-07-11 23:46:17.612342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.612605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.612651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.612869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.613093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.613121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.613326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.613563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.613612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.613832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.614057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.614084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.614237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.614457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.614504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.614752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.615014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.615065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.615310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.615516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.615564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 
00:32:56.859 [2024-07-11 23:46:17.615829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.616102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.616129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.616340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.616598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.616655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.616948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.617224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.617252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.617407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.617646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.617696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.617945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.618158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.618191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.618366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.618599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.618649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.618901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.619204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.619232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 
00:32:56.859 [2024-07-11 23:46:17.619442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.619685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.619735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.620034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.620273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.620301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.620562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.620829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.620881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.621112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.621320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.859 [2024-07-11 23:46:17.621348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.859 qpair failed and we were unable to recover it. 00:32:56.859 [2024-07-11 23:46:17.621609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.621873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.621924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.622204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.622433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.622461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.622692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.622935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.622984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 
00:32:56.860 [2024-07-11 23:46:17.623222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.623475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.623526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.623793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.624085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.624112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.624320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.624548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.624598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.624848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.625019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.625047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.625229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.625450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.625500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.625763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.626012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.626061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.626319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.626562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.626619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 
00:32:56.860 [2024-07-11 23:46:17.626891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.627135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.627190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.627416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.627594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.627641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.627929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.628201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.628230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.628465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.628690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.628737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.629013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.629223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.629251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.629432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.629682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.629731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.629991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.630264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.630293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 
00:32:56.860 [2024-07-11 23:46:17.630530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.630817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.630868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.631076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.631284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.631312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.631617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.631843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.631895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.632175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.632382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.632409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.632657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.632960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.633015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.633298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.633528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.633576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.633878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.634185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.634215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 
00:32:56.860 [2024-07-11 23:46:17.634470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.634757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.634806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.635081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.635365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.635393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.635607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.635860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.635908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.636184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.636427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.636454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.636677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.636960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.637009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.637282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.637477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.637524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.860 qpair failed and we were unable to recover it. 00:32:56.860 [2024-07-11 23:46:17.637773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.860 [2024-07-11 23:46:17.638044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.638092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 
00:32:56.861 [2024-07-11 23:46:17.638312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.638574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.638625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.638874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.639180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.639208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.639453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.639735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.639783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.640064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.640210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.640238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.640475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.640768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.640818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.641050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.641319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.641347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.641580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.641871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.641931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 
00:32:56.861 [2024-07-11 23:46:17.642135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.642362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.642389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.642621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.642863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.642911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.643192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.643425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.643452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.643708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.644005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.644055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.644313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.644521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.644571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.644871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.645177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.645205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.645449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.645808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.645859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 
00:32:56.861 [2024-07-11 23:46:17.646128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.646387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.646415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.646646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.646849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.646898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.647164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.647363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.647390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.647648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.647935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.647985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.648257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.648482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.648532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.648702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.648912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.648963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.649245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.649545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.649594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 
00:32:56.861 [2024-07-11 23:46:17.649836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.650106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.650133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.650383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.650636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.650686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.650898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.651099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.651126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.651354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.651610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.861 [2024-07-11 23:46:17.651667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.861 qpair failed and we were unable to recover it. 00:32:56.861 [2024-07-11 23:46:17.651935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.652189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.652217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.652459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.652716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.652763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.653038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.653257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.653284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 
00:32:56.862 [2024-07-11 23:46:17.653543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.653821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.653868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.654154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.654364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.654391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.654693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.655027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.655080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.655333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.655586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.655635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.655925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.656190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.656218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.656459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.656698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.656748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.656997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.657272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.657300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 
00:32:56.862 [2024-07-11 23:46:17.657503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.657763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.657811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.658084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.658358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.658387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.658606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.658846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.658895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.659161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.659390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.659418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.659619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.659824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.659874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.660074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.660306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.660339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 00:32:56.862 [2024-07-11 23:46:17.660562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.660773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.862 [2024-07-11 23:46:17.660822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.862 qpair failed and we were unable to recover it. 
00:32:56.862 [2024-07-11 23:46:17.661075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.862 [2024-07-11 23:46:17.661246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.862 [2024-07-11 23:46:17.661273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:56.862 qpair failed and we were unable to recover it.
00:32:56.862-00:32:56.867 [2024-07-11 23:46:17.661522 through 23:46:17.740701] the same three-message sequence repeats for every subsequent reconnect attempt: two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, followed by one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420", each attempt ending in "qpair failed and we were unable to recover it."
00:32:56.867 [2024-07-11 23:46:17.740874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.741078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.741105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.867 qpair failed and we were unable to recover it. 00:32:56.867 [2024-07-11 23:46:17.741320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.741539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.741587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.867 qpair failed and we were unable to recover it. 00:32:56.867 [2024-07-11 23:46:17.741885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.742159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.742187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.867 qpair failed and we were unable to recover it. 00:32:56.867 [2024-07-11 23:46:17.742422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.742645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.742694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.867 qpair failed and we were unable to recover it. 00:32:56.867 [2024-07-11 23:46:17.742906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.743131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.743167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.867 qpair failed and we were unable to recover it. 00:32:56.867 [2024-07-11 23:46:17.743394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.743652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.743702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.867 qpair failed and we were unable to recover it. 00:32:56.867 [2024-07-11 23:46:17.743990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.744249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.744279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.867 qpair failed and we were unable to recover it. 
00:32:56.867 [2024-07-11 23:46:17.744485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.867 [2024-07-11 23:46:17.744701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.744749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.744986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.745161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.745190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.745427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.745688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.745738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.745970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.746219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.746247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.746444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.746682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.746734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.746950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.747153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.747181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.747379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.747613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.747668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 
00:32:56.868 [2024-07-11 23:46:17.747903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.748123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.748165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.748409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.748602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.748656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.748859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.749094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.749122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.749344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.749569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.749619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.749845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.750037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.750064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.750263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.750449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.750503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.750770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.751005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.751053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 
00:32:56.868 [2024-07-11 23:46:17.751254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.751473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.751523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.751763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.751986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.752013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.752242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.752432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.752490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.752683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.752953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.753002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.753240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.753469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.753519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.753699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.753934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.753993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.754194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.754401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.754444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 
00:32:56.868 [2024-07-11 23:46:17.754654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.754926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.754973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.755172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.755408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.755465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.755709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.755954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.756004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.756168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.756345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.756391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.756625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.756839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.756891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.757127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.757401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.757465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.757676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.757908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.757962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 
00:32:56.868 [2024-07-11 23:46:17.758201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.758423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.758465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.758766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.759083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.759132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.759432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.759662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.868 [2024-07-11 23:46:17.759711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.868 qpair failed and we were unable to recover it. 00:32:56.868 [2024-07-11 23:46:17.759945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.760208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.760238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.760447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.760666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.760714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.760952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.761200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.761228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.761429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.761638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.761686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 
00:32:56.869 [2024-07-11 23:46:17.761910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.762125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.762162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.762389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.762626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.762677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.762918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.763169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.763197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.763385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.763589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.763641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.763876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.764154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.764187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.764405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.764694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.764748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.765039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.765323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.765355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 
00:32:56.869 [2024-07-11 23:46:17.765610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.765911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.765960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.766256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.766503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.766531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.766832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.767063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.767110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.767319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.767583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.767636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.767885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.768181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.768210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.768581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.768907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.768957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.769215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.769430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.769457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 
00:32:56.869 [2024-07-11 23:46:17.769695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.769976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.770026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.770319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.770552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.770602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.770872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.771153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.771187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.771443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.771703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.771762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.772062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.772322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.772352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.772662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.772942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.772989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.773231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.773424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.773471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 
00:32:56.869 [2024-07-11 23:46:17.773715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.773936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.773984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.774279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.774489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.774537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.774770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.774969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.774997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.775252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.775540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.775590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.775851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.776030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.776057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.869 [2024-07-11 23:46:17.776263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.776590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.869 [2024-07-11 23:46:17.776638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.869 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.776856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.777090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.777117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 
00:32:56.870 [2024-07-11 23:46:17.777504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.777824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.777877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.778148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.778383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.778411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.778633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.778904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.778959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.779171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.779635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.779678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.779974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.780275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.780306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.780632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.780935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.780983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.781280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.781460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.781509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 
00:32:56.870 [2024-07-11 23:46:17.781704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.781952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.782001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.782252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.782547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.782598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.782871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.783102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.783129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.783380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.783637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.783687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.783969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.784195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.784224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.784486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.784780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.784829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.785074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.785347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.785375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 
00:32:56.870 [2024-07-11 23:46:17.785673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.785957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.786004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.786231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.786477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.786525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.786782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.787045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.787100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.787342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.787542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.787570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.787853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.788130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.788173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.788439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.788714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.788765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.789012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.789263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.789291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 
00:32:56.870 [2024-07-11 23:46:17.789541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.789783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.789842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.790077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.790279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.790307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.790607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.790874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.790924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.870 [2024-07-11 23:46:17.791168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.791422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.870 [2024-07-11 23:46:17.791449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:56.870 qpair failed and we were unable to recover it. 00:32:56.871 [2024-07-11 23:46:17.791710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.791900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.791947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.139 qpair failed and we were unable to recover it. 00:32:57.139 [2024-07-11 23:46:17.792248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.792476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.792525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.139 qpair failed and we were unable to recover it. 00:32:57.139 [2024-07-11 23:46:17.792784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.793045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.793096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.139 qpair failed and we were unable to recover it. 
00:32:57.139 [2024-07-11 23:46:17.793355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.793573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.793623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.139 qpair failed and we were unable to recover it. 00:32:57.139 [2024-07-11 23:46:17.793805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.794055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.139 [2024-07-11 23:46:17.794115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.794391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.794660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.794711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.794965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.795279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.795307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.795525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.795786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.795836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.796108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.796370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.796399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.796673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.796922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.796969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 
00:32:57.140 [2024-07-11 23:46:17.797224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.797425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.797452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.797711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.798024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.798075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.798300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.798557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.798606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.798880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.799193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.799220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.799417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.799619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.799677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.799968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.800221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.800251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.800482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.800740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.800796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 
00:32:57.140 [2024-07-11 23:46:17.801049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.801213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.801240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.801459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.801763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.801818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.802049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.802256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.802284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.802504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.802723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.802773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.803061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.803340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.803368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.803667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.803901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.803951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.804165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.804385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.804413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 
00:32:57.140 [2024-07-11 23:46:17.804656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.804877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.804924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.805219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.805509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.805562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.805771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.806048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.806096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.806309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.806521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.806571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.806899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.807250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.807277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.807528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.807813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.807869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.808224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.808442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.808470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 
00:32:57.140 [2024-07-11 23:46:17.808683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.808960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.809015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.140 [2024-07-11 23:46:17.809250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.809463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.140 [2024-07-11 23:46:17.809510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.140 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.809751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.809992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.810043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.810278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.810480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.810529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.810731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.810980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.811027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.811273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.811526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.811583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.811878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.812156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.812187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 
00:32:57.141 [2024-07-11 23:46:17.812477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.812743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.812792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.813077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.813483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.813535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.813801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.814021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.814069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.814265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.814529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.814579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.814838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.815108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.815135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.815405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.815662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.815710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.815997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.816203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.816240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 
00:32:57.141 [2024-07-11 23:46:17.816425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.816684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.816730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.816954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.817211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.817239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.817478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.817790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.817839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.818121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.818358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.818386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.818693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.818978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.819025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.819274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.819524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.819574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.819858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.820123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.820165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 
00:32:57.141 [2024-07-11 23:46:17.820486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.820737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.820788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.821072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.821402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.821433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.821655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.821954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.821982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.822194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.822479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.822507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.822750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.822949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.822976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.823190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.823428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.823455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.823749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.824007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.824054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 
00:32:57.141 [2024-07-11 23:46:17.824359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.824668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.824715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.824903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.825111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.825150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.825402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.825613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.825662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.141 qpair failed and we were unable to recover it. 00:32:57.141 [2024-07-11 23:46:17.825907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.141 [2024-07-11 23:46:17.826211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.826274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.826524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.826746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.826794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.827032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.827243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.827270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.827536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.827829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.827884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 
00:32:57.142 [2024-07-11 23:46:17.828151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.828330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.828358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.828607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.828845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.828895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.829112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.829344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.829372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.829629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.829909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.829958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.830180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.830368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.830395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.830623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.830932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.830985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.831245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.831462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.831510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 
00:32:57.142 [2024-07-11 23:46:17.831796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.832099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.832126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.832376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.832620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.832669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.832909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.833132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.833175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.833430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.833657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.833707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.833974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.834285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.834313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.834509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.834736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.834783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.835028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.835336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.835364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 
00:32:57.142 [2024-07-11 23:46:17.835648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.835898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.835941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.836224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.836474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.836524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.836806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.837121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.837187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.837391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.837612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.837664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.837898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.838176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.838204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.838431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.838639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.838690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.838988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.839337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.839365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 
00:32:57.142 [2024-07-11 23:46:17.839644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.839938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.839990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.840252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.840460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.840509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.840784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.841057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.841084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.841326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.841593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.841643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.142 [2024-07-11 23:46:17.841852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.842066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.142 [2024-07-11 23:46:17.842093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.142 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.842304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.842500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.842547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.842845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.843104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.843132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 
00:32:57.143 [2024-07-11 23:46:17.843392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.843610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.843656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.843907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.844160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.844191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.844398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.844657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.844713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.844959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.845198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.845227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.845453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.845758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.845804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.846072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.846334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.846362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.846579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.846765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.846816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 
00:32:57.143 [2024-07-11 23:46:17.847090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.847322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.847351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.847532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.847773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.847822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.848118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.848336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.848364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.848619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.848866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.848915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.849206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.849382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.849409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.849696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.849934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.849984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.850243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.850465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.850512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 
00:32:57.143 [2024-07-11 23:46:17.850713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.851005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.851058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.851295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.851550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.851599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.851941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.852214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.852242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.852483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.852912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.852960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.853242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.853474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.853502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.853734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.854110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.854178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.854357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.854607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.854655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 
00:32:57.143 [2024-07-11 23:46:17.854947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.855177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.855205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.855437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.855810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.855865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.856064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.856291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.856318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.856619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.856954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.857003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.857238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.857427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.857474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.143 qpair failed and we were unable to recover it. 00:32:57.143 [2024-07-11 23:46:17.857665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.143 [2024-07-11 23:46:17.857862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.857910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.858153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.858344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.858371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 
00:32:57.144 [2024-07-11 23:46:17.858612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.858846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.858894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.859148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.859331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.859358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.859652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.859914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.859961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.860176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.860346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.860373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.860658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.860859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.860911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.861112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.861324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.861352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.861655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.861949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.861998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 
00:32:57.144 [2024-07-11 23:46:17.862208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.862423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.862472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.862726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.863024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.863072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.863340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.863589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.863631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.863896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.864157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.864187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.864400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.864623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.864674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.864942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.865203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.865232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 00:32:57.144 [2024-07-11 23:46:17.865504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.865742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.144 [2024-07-11 23:46:17.865786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.144 qpair failed and we were unable to recover it. 
00:32:57.149 [2024-07-11 23:46:17.941708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.941971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.942032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.942276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.942505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.942553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.942782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.943024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.943051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.943239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.943468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.943523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.943736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.943944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.943991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.944227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.944470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.944528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.944798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.945028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.945055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 
00:32:57.149 [2024-07-11 23:46:17.945261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.945499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.945545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.945768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.946007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.946055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.946273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.946508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.946558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.946771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.947015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.947043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.947265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.947471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.947526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.947728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.947953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.947997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.948235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.948387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.948440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 
00:32:57.149 [2024-07-11 23:46:17.948663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.948892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.948941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.949156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.949327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.949354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.949580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.949805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.949855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.149 qpair failed and we were unable to recover it. 00:32:57.149 [2024-07-11 23:46:17.950053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.149 [2024-07-11 23:46:17.950218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.950246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.950485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.950717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.950764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.951025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.951212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.951241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.951476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.951768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.951817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 
00:32:57.150 [2024-07-11 23:46:17.952049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.952205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.952233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.952406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.952638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.952687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.952899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.953094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.953122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.953333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.953559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.953610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.953857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.954094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.954121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.954331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.954604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.954653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.954898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.955130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.955173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 
00:32:57.150 [2024-07-11 23:46:17.955376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.955597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.955645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.955828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.956062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.956089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.956326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.956537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.956587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.956796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.957054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.957082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.957310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.957547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.957595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.957844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.958069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.958096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.958333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.958534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.958584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 
00:32:57.150 [2024-07-11 23:46:17.958796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.959044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.959072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.959322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.959539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.959588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.959837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.960071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.960098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.960293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.960529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.960578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.960786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.961003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.961030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.961254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.961494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.961543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.961756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.961998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.962049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 
00:32:57.150 [2024-07-11 23:46:17.962260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.962454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.962511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.962740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.962943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.962994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.963216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.963427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.963474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.963665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.963936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.963984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.964218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.964414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.964469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.964708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.964986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.965035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.150 qpair failed and we were unable to recover it. 00:32:57.150 [2024-07-11 23:46:17.965269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.150 [2024-07-11 23:46:17.965539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.965587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 
00:32:57.151 [2024-07-11 23:46:17.965798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.966009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.966036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.966243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.966518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.966565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.966810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.967028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.967056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.967257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.967450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.967504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.967691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.967945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.967994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.968233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.968439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.968492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.968707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.968961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.969009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 
00:32:57.151 [2024-07-11 23:46:17.969243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.969498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.969548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.969766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.969995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.970022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.970204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.970452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.970503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.970747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.970964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.970991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.971181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.971384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.971430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.971678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.971910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.971960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.972197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.972375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.972419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 
00:32:57.151 [2024-07-11 23:46:17.972609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.972815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.972863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.973064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.973282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.973335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.973591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.973827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.973874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.974076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.974302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.974330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.974551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.974810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.974859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.975064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.975265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.975293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.975549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.975760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.975813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 
00:32:57.151 [2024-07-11 23:46:17.976043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.976252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.976281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.976473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.976711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.976762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.976988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.977214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.977242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.977532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.977789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.977837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.978048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.978205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.978233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.978468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.978734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.978784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.978989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.979229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.979278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 
00:32:57.151 [2024-07-11 23:46:17.979528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.979762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.979811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.151 qpair failed and we were unable to recover it. 00:32:57.151 [2024-07-11 23:46:17.980010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.980214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.151 [2024-07-11 23:46:17.980272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.980560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.980868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.980913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.981083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.981280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.981308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.981514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.981786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.981836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.982019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.982220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.982255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.982545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.982819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.982868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 
00:32:57.152 [2024-07-11 23:46:17.983069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.983264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.983292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.983535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.983805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.983855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.984092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.984290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.984319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.984567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.984848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.984897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.985137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.985363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.985390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.985671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.985917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.985967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.986180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.986405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.986432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 
00:32:57.152 [2024-07-11 23:46:17.986662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.986896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.986954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.987161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.987401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.987428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.987667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.987936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.987986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.988163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.988497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.988550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.988729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.988949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.988996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.989227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.989474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.989522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.989740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.989954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.989981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 
00:32:57.152 [2024-07-11 23:46:17.990183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.990403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.990464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.990719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.990939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.990989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.991179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.991433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.991506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.991698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.991973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.992023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.992250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.992487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.992537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.992745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.992968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.992995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.993196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.993457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.993508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 
00:32:57.152 [2024-07-11 23:46:17.993788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.994113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.994149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.994380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.994632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.994681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.994902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.995155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.995184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.995368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.995585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.995635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.152 qpair failed and we were unable to recover it. 00:32:57.152 [2024-07-11 23:46:17.995883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.152 [2024-07-11 23:46:17.996111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.153 [2024-07-11 23:46:17.996148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.153 qpair failed and we were unable to recover it. 00:32:57.153 [2024-07-11 23:46:17.996356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.153 [2024-07-11 23:46:17.996600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.153 [2024-07-11 23:46:17.996647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.153 qpair failed and we were unable to recover it. 00:32:57.153 [2024-07-11 23:46:17.996892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.153 [2024-07-11 23:46:17.997205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.153 [2024-07-11 23:46:17.997233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.153 qpair failed and we were unable to recover it. 
00:32:57.157 [2024-07-11 23:46:18.074035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.074231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.074259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.157 qpair failed and we were unable to recover it. 00:32:57.157 [2024-07-11 23:46:18.074482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.074769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.074819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.157 qpair failed and we were unable to recover it. 00:32:57.157 [2024-07-11 23:46:18.075005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.075187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.075216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.157 qpair failed and we were unable to recover it. 00:32:57.157 [2024-07-11 23:46:18.075401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.075614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.157 [2024-07-11 23:46:18.075671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 00:32:57.158 [2024-07-11 23:46:18.075828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.076030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.076057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 00:32:57.158 [2024-07-11 23:46:18.076268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.076511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.076562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 00:32:57.158 [2024-07-11 23:46:18.076753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.076971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.076999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 
00:32:57.158 [2024-07-11 23:46:18.077251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.077481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.077528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 00:32:57.158 [2024-07-11 23:46:18.077829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.078096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.078123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 00:32:57.158 [2024-07-11 23:46:18.078341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.078631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.078680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 00:32:57.158 [2024-07-11 23:46:18.078871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.079129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.158 [2024-07-11 23:46:18.079165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.158 qpair failed and we were unable to recover it. 00:32:57.158 [2024-07-11 23:46:18.079330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.079613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.079667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.079883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.080121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.080164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.080371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.080622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.080674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 
00:32:57.428 [2024-07-11 23:46:18.080941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.081174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.081203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.081376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.081572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.081619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.081827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.082039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.082067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.082234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.082417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.082445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.082669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.082916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.082943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.083149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.083364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.083397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.083672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.083921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.083948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 
00:32:57.428 [2024-07-11 23:46:18.084120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.084326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.084355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.084604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.084853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.084881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.085193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.085358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.085397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.085609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.085850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.428 [2024-07-11 23:46:18.085877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.428 qpair failed and we were unable to recover it. 00:32:57.428 [2024-07-11 23:46:18.086094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.086341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.086371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.086571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.086794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.086827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.087053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.087283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.087313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 
00:32:57.429 [2024-07-11 23:46:18.087517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.087707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.087734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.087933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.088158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.088187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.088350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.088586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.088613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.088782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.088960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.088987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.089217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.089398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.089426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.089634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.089895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.089922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.090120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.090305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.090333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 
00:32:57.429 [2024-07-11 23:46:18.090552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.090824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.090871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.091108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.091306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.091334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.091566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.091820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.091867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.092074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.092284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.092312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.092509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.092748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.092796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.092982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.093169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.093197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.093405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.093637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.093686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 
00:32:57.429 [2024-07-11 23:46:18.093879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.094102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.094129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.094313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.094506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.094555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.094739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.094952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.094999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.095163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.095354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.095400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.095614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.095825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.095874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.096059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.096268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.096314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.096497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.096768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.096826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 
00:32:57.429 [2024-07-11 23:46:18.097034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.097227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.097275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.097491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.097735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.097784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.097989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.098219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.098255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.098455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.098682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.098729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.098950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.099159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.099187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.099355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.099608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.099656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.099854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.100053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.100080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 
00:32:57.429 [2024-07-11 23:46:18.100256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.100473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.100537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.429 [2024-07-11 23:46:18.100742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.100965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.429 [2024-07-11 23:46:18.101014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.429 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.101203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.101426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.101476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.101698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.101939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.101986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.102181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.102370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.102419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.102620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.102849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.102899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.103109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.103320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.103366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 
00:32:57.430 [2024-07-11 23:46:18.103582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.103825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.103873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.104053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.104264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.104312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.104542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.104790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.104841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.105048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.105202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.105231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.105454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.105672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.105720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.105929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.106154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.106188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.106406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.106638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.106687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 
00:32:57.430 [2024-07-11 23:46:18.106882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.107072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.107099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.107286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.107487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.107536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.107713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.107922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.107969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.108178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.108401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.108446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.108660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.108879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.108927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.109114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.109307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.109335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.109496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.109698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.109747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 
00:32:57.430 [2024-07-11 23:46:18.109946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.110129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.110179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.110349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.110538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.110592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.110819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.111005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.111032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.111219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.111434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.111494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.111721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.111980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.112027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.112258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.112516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.112565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.112730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.112951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.113000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 
00:32:57.430 [2024-07-11 23:46:18.113185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.113398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.113443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.113665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.113880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.113927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.114110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.114310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.114356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.114552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.114802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.430 [2024-07-11 23:46:18.114851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.430 qpair failed and we were unable to recover it. 00:32:57.430 [2024-07-11 23:46:18.115042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.115223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.115269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.115482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.115712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.115764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.115983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.116156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.116184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 
00:32:57.431 [2024-07-11 23:46:18.116364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.116625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.116673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.116864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.117067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.117093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.117362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.117613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.117662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.117854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.118028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.118055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.118254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.118467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.118522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.118734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.118926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.118976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.119240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.119461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.119508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 
00:32:57.431 [2024-07-11 23:46:18.119705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.119899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.119957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.120166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.120363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.120409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.120604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.120800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.120849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.121027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.121208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.121235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.121429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.121673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.121722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.121948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.122177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.122206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.122387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.122576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.122625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 
00:32:57.431 [2024-07-11 23:46:18.122891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.123117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.123153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.123375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.123610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.123661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.123889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.124121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.124155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.124367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.124566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.124617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.124809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.125008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.125035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.125220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.125421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.125478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.125704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.125950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.125998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 
00:32:57.431 [2024-07-11 23:46:18.126182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.126363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.126414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.126635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.126849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.126900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.127120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.127327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.127373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.127554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.127803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.127853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.128115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.128312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.128357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.128555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.128776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.128825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 00:32:57.431 [2024-07-11 23:46:18.129034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.129244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-07-11 23:46:18.129272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.431 qpair failed and we were unable to recover it. 
00:32:57.432 [2024-07-11 23:46:18.129518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.129733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.129781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.129962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.130172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.130201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.130395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.130640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.130690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.130954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.131178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.131206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.131413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.131639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.131687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.131868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.132077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.132104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.132291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.132532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.132579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 
00:32:57.432 [2024-07-11 23:46:18.132770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.132986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.133034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.133219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.133502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.133561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.133738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.133964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.134018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.134205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.134465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.134520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.134705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.134946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.134997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.135207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.135369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.135413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.135620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.135816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.135865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 
00:32:57.432 [2024-07-11 23:46:18.136072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.136256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.136303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.136495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.136713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.136763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.136980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.137167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.137195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.137403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.137637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.137690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.137879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.138080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.138107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.138354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.138575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.138630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.138862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.139097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.139125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 
00:32:57.432 [2024-07-11 23:46:18.139327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.139543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.139593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.139797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.140019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.140046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.140252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.140449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.140498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.140718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.140965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.141012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.432 [2024-07-11 23:46:18.141195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.141406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-07-11 23:46:18.141451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.432 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.141660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.141891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.141943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.142123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.142320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.142367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 
00:32:57.433 [2024-07-11 23:46:18.142565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.142792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.142840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.143047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.143227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.143255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.143461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.143677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.143731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.143913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.144126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.144162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.144373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.144569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.144619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.144828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.145007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.145034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.145244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.145473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.145523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 
00:32:57.433 [2024-07-11 23:46:18.145712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.145923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.145973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.146181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.146368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.146414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.146596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.146787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.146835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.147051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.147256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.147284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.147505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.147751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.147798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.148017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.148236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.148286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.148489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.148677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.148726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 
00:32:57.433 [2024-07-11 23:46:18.148926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.149164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.149192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.149416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.149666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.149716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.149932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.150155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.150186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.150373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.150533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.150582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.150779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.151033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.151084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.151291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.151499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.151547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.151749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.152009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.152058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 
00:32:57.433 [2024-07-11 23:46:18.152240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.152444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.152505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.152745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.152980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.153031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.153242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.153455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.153506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.153704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.153898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.153946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.154163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.154318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.154345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.154548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.154760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.154811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 00:32:57.433 [2024-07-11 23:46:18.154993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.155211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-07-11 23:46:18.155239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.433 qpair failed and we were unable to recover it. 
00:32:57.434 [2024-07-11 23:46:18.155417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.155638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.155688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.155894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.156101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.156129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.156353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.156554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.156603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.156793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.156997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.157025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.157213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.157408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.157453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.157635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.157823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.157873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.158031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.158226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.158275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 
00:32:57.434 [2024-07-11 23:46:18.158464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.158719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.158768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.158948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.159129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.159165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.159318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.159510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.159559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.159787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.159987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.160014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.160233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.160433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.160489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.160716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.160940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.160989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.161203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.161395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.161440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 
00:32:57.434 [2024-07-11 23:46:18.161636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.161861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.161913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.162094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.162282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.162328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.162564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.162793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.162843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.162990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.163214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.163243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.163466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.163721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.163770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.163956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.164136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.164171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.164355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.164585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.164634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 
00:32:57.434 [2024-07-11 23:46:18.164840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.165045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.165072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.165250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.165492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.165542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.165731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.165949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.165999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.166214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.166466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.166514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.166701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.166920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.166970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.167181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.167379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.167429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.167648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.167853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.167904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 
00:32:57.434 [2024-07-11 23:46:18.168119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.168308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.168335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.168516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.168702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.168751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.434 qpair failed and we were unable to recover it. 00:32:57.434 [2024-07-11 23:46:18.168968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.434 [2024-07-11 23:46:18.169209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.169237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.169437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.169683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.169733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.169970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.170211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.170240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.170455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.170653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.170704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.170932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.171137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.171177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 
00:32:57.435 [2024-07-11 23:46:18.171393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.171638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.171687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.171890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.172129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.172165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.172344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.172562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.172612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.172803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.173048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.173100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.173329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.173561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.173612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.173833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.174036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.174063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.174248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.174451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.174511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 
00:32:57.435 [2024-07-11 23:46:18.174742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.174955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.175006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.175198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.175437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.175488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.175706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.175928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.175979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.176199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.176370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.176419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.176640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.176881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.176932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.177154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.177340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.177367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.177566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.177809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.177856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 
00:32:57.435 [2024-07-11 23:46:18.178046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.178251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.178280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.178477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.178727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.178776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.178975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.179221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.179255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.179461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.179681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.179729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.179941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.180124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.180158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.180355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.180573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.180623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.180837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.181069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.181096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 
00:32:57.435 [2024-07-11 23:46:18.181322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.181513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.181563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.181761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.181991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.182039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.182259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.182493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.182543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.182730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.182975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.183022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.183172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.183373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.435 [2024-07-11 23:46:18.183421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.435 qpair failed and we were unable to recover it. 00:32:57.435 [2024-07-11 23:46:18.183627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.436 [2024-07-11 23:46:18.183849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.436 [2024-07-11 23:46:18.183897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.436 qpair failed and we were unable to recover it. 00:32:57.436 [2024-07-11 23:46:18.184084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.436 [2024-07-11 23:46:18.184277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.436 [2024-07-11 23:46:18.184323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.436 qpair failed and we were unable to recover it. 
[... the same four-line failure pattern (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without interruption from 23:46:18.184518 through 23:46:18.246724 ...]
00:32:57.440 [2024-07-11 23:46:18.246940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.247171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.247199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.440 qpair failed and we were unable to recover it. 00:32:57.440 [2024-07-11 23:46:18.247383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.247629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.247676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.440 qpair failed and we were unable to recover it. 00:32:57.440 [2024-07-11 23:46:18.247851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.248058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.248085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.440 qpair failed and we were unable to recover it. 00:32:57.440 [2024-07-11 23:46:18.248270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.248436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.248501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.440 qpair failed and we were unable to recover it. 00:32:57.440 [2024-07-11 23:46:18.248708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.248958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.249008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.440 qpair failed and we were unable to recover it. 00:32:57.440 [2024-07-11 23:46:18.249167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.249392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.249435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.440 qpair failed and we were unable to recover it. 00:32:57.440 [2024-07-11 23:46:18.249616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.249835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.249881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.440 qpair failed and we were unable to recover it. 
00:32:57.440 [2024-07-11 23:46:18.250089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.440 [2024-07-11 23:46:18.250263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.250308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.250535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.250753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.250802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.250979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.251202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.251251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.251464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.251650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.251701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.251932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.252113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.252149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.252367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.252559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.252609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.252784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.252995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.253044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 
00:32:57.441 [2024-07-11 23:46:18.253262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.253453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.253503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.253727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.253930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.253978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.254162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.254354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.254401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.254574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.254802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.254850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.255060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.255250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.255278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.255513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.255728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.255775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.255951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.256154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.256187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 
00:32:57.441 [2024-07-11 23:46:18.256401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.256630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.256680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.256893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.257066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.257093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.257294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.257551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.257600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.257793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.258041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.258091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.258305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.258535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.258593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.258783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.258985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.259034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.259229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.259476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.259525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 
00:32:57.441 [2024-07-11 23:46:18.259753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.259995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.260045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.260268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.260530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.260579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.260761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.260921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.260970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.261152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.261333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.261360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.261545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.261792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.261841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.262044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.262262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.441 [2024-07-11 23:46:18.262290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.441 qpair failed and we were unable to recover it. 00:32:57.441 [2024-07-11 23:46:18.262442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.262632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.262679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 
00:32:57.442 [2024-07-11 23:46:18.262904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.263106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.263133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.263356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.263594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.263643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.263859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.264040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.264067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.264248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.264472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.264519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.264738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.264978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.265026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.265237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.265494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.265542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.265771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.265996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.266023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 
00:32:57.442 [2024-07-11 23:46:18.266230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.266407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.266456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.266660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.266873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.266921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.267125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.267328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.267380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.267616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.267872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.267922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.268134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.268361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.268409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.268627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.268884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.268932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.269163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.269360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.269413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 
00:32:57.442 [2024-07-11 23:46:18.269606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.269823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.269870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.270048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.270227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.270255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.270476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.270721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.270769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.270964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.271162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.271190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.271336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.271529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.271577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.271787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.272006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.272055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.272261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.272458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.272505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 
00:32:57.442 [2024-07-11 23:46:18.272721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.272916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.272965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.273115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.273283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.273311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.273522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.273716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.273763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.273976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.274209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.274244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.274480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.274697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.274746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.274958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.275165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.275193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.275341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.275557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.275606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 
00:32:57.442 [2024-07-11 23:46:18.275831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.276027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.276055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.442 [2024-07-11 23:46:18.276235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.276445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.442 [2024-07-11 23:46:18.276497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.442 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.276726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.276974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.277021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.277229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.277437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.277497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.277719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.277911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.277960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.278147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.278366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.278394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.278594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.278817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.278867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 
00:32:57.443 [2024-07-11 23:46:18.279042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.279228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.279256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.279456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.279710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.279756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.279971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.280172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.280231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.280459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.280716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.280766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.280950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.281131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.281170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.281381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.281594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.281644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.281836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.282056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.282088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 
00:32:57.443 [2024-07-11 23:46:18.282316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.282559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.282608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.282792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.282970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.283018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.283228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.283436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.283484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.283680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.283924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.283973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.284182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.284425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.284483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.284693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.284915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.284964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.285170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.285354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.285382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 
00:32:57.443 [2024-07-11 23:46:18.285573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.285812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.285861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.286071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.286277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.286305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.286497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.286712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.286761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.287000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.287221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.287272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.287484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.287674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.287724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.287930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.288162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.288191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.288402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.288630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.288678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 
00:32:57.443 [2024-07-11 23:46:18.288892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.289090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.289117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.289338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.289565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.289613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.289791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.289998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.290025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.290209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.290392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.290419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.443 [2024-07-11 23:46:18.290618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.290837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.443 [2024-07-11 23:46:18.290886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.443 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.291095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.291251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.291298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.291521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.291737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.291786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 
00:32:57.444 [2024-07-11 23:46:18.291981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.292189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.292218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.292408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.292627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.292675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.292867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.293042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.293069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.293223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.293396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.293441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.293618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.293836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.293883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.294093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.294309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.294336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.294524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.294741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.294788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 
00:32:57.444 [2024-07-11 23:46:18.294969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.295153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.295181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.295399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.295627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.295677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.295886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.296116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.296155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.296355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.296595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.296644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.296873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.297076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.297103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.297292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.297489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.297545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 00:32:57.444 [2024-07-11 23:46:18.297745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.297992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.444 [2024-07-11 23:46:18.298040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.444 qpair failed and we were unable to recover it. 
00:32:57.449 [2024-07-11 23:46:18.365516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.449 [2024-07-11 23:46:18.365777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.449 [2024-07-11 23:46:18.365831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.449 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.366040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.366213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.366241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.366408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.366596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.366645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.366855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.367031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.367059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.367245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.367490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.367539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.367751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.367955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.367982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.368165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.368349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.368394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 
00:32:57.715 [2024-07-11 23:46:18.368605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.368790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.368840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.369048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.369239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.369267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.369459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.369694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.369744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.715 qpair failed and we were unable to recover it. 00:32:57.715 [2024-07-11 23:46:18.369941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.715 [2024-07-11 23:46:18.370148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.370176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.370388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.370622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.370669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.370897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.371131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.371166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.371347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.371568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.371618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 
00:32:57.716 [2024-07-11 23:46:18.371844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.372053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.372081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.372269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.372459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.372509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.372739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.372955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.373004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.373206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.373414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.373471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.373695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.373913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.373966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.374152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.374336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.374364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.374591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.374839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.374888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 
00:32:57.716 [2024-07-11 23:46:18.375078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.375291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.375319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.375514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.375724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.375773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.375946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.376156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.376188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.376332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.376532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.376579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.376769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.377005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.377054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.377202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.377439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.377495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.377705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.377929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.377984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 
00:32:57.716 [2024-07-11 23:46:18.378168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.378415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.378462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.378673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.378918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.378967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.379175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.379401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.379444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.379629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.379848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.379901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.380112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.380303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.380332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.380505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.380722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.380772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.381003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.381205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.381233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 
00:32:57.716 [2024-07-11 23:46:18.381413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.381600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.381647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.381863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.382091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.382118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.382346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.382547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.382594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.382819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.383049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.383076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.383255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.383448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.383517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.383741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.383956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.384006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.384195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.384412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.384469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 
00:32:57.716 [2024-07-11 23:46:18.384698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.384902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.384949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.385163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.385386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.385431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.385655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.385903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.385953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.386160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.386325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.386371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.386567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.386790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.386839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.387048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.387206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.387234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.387435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.387636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.387686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 
00:32:57.716 [2024-07-11 23:46:18.387902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.388097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.388124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.388362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.388534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.388586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.388770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.388988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.389036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.389250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.389468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.389517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.389698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.389910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.389960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.716 qpair failed and we were unable to recover it. 00:32:57.716 [2024-07-11 23:46:18.390149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.716 [2024-07-11 23:46:18.390331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.390358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.390566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.390725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.390773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 
00:32:57.717 [2024-07-11 23:46:18.390992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.391230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.391280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.391473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.391682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.391731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.391915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.392093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.392120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.392319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.392527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.392577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.392807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.392987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.393014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.393166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.393407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.393467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.393697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.393907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.393959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 
00:32:57.717 [2024-07-11 23:46:18.394134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.394323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.394351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.394581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.394828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.394878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.395064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.395240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.395268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.395460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.395700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.395750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.395934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.396145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.396180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.396394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.396585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.396633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.396812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.397362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.397394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 
00:32:57.717 [2024-07-11 23:46:18.398026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.398245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.398292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.398503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.398662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.398712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.398894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.399077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.399105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.399329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.399518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.399569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.399767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.399970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.399997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.400174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.400360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.400405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.400629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.400875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.400925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 
00:32:57.717 [2024-07-11 23:46:18.401114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.401298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.401345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.401586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.401799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.401852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.402059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.402210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.402238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.402442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.402693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.402745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.402966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.403216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.403251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.403481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.403732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.403787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.403994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.404238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.404286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 
00:32:57.717 [2024-07-11 23:46:18.404452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.404703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.404752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.404958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.405166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.405196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.405405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.405645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.405693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.405903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.406054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.406082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.406270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.406472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.406521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.406733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.406966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.407016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.407204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.407464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.407519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 
00:32:57.717 [2024-07-11 23:46:18.407673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.407867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.407917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.408124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.408318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.408369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.408561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.409216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.409247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.409895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.410108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.410137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.410312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.410507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.410558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.410748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.410975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.411003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 00:32:57.717 [2024-07-11 23:46:18.411211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.411420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.717 [2024-07-11 23:46:18.411477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.717 qpair failed and we were unable to recover it. 
00:32:57.717 [2024-07-11 23:46:18.411688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:57.717 [2024-07-11 23:46:18.411929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:57.717 [2024-07-11 23:46:18.411979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:57.717 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111) and qpair connection error for tqpair=0x1b5ff50 at 10.0.0.2:4420 repeat continuously from 23:46:18.412 through 23:46:18.481 ...]
00:32:57.720 [2024-07-11 23:46:18.481385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:57.720 [2024-07-11 23:46:18.481584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:57.720 [2024-07-11 23:46:18.481635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:57.720 qpair failed and we were unable to recover it.
00:32:57.720 [2024-07-11 23:46:18.481790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.481971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.481997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.482206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.482367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.482395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.482581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.482786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.482834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.483015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.483205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.483261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.483416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.483608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.483659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.483818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.483969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.483997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.484184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.484349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.484377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 
00:32:57.720 [2024-07-11 23:46:18.484579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.484772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.484822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.485006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.485214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.485242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.485401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.485598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.485648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.485806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.485984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.486011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.486201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.486410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.486437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.486640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.486782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.486828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.486986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.487214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.487243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 
00:32:57.720 [2024-07-11 23:46:18.487428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.487610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.487638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.487864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.488086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.488114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.488308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.488475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.488526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.488700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.488923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.488972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.489147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.489331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.489358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.720 qpair failed and we were unable to recover it. 00:32:57.720 [2024-07-11 23:46:18.489540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.720 [2024-07-11 23:46:18.489752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.489801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.490012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.490218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.490265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.490488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.490708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.490758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.490969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.491204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.491238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.491460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.491717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.491765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.491924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.492118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.492155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.492310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.492535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.492599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.492812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.492985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.493013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.493202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.493400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.493427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.493579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.493768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.493818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.493999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.494205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.494241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.494425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.494623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.494668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.494865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.495048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.495075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.495239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.495455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.495505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.495729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.495959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.495987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.496203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.496372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.496400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.496600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.496813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.496863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.497044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.497249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.497277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.497451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.497643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.497694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.497880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.498077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.498104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.498296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.498467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.498523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.498725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.498877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.498927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.499109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.500069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.500102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.500280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.501109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.501152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.501310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.502106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.502137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.502357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.502561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.502611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.502845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.503048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.503075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.503238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.503386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.503414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.503607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.503832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.503880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.504060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.504216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.504264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.504454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.504683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.504733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.504942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.505127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.505164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.505332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.505549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.505598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.505781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.505987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.506014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.506200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.506418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.506463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.506681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.506889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.506942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.507103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.507263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.507310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.507497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.507706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.507753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.507975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.508212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.508241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.508397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.508657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.508711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.508904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.509098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.509126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.509293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.509464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.509521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.509716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.509932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.509980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.510185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.510435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.510486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.510709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.510960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.511009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.511171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.511362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.511414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.511630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.511878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.511926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.512133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.512325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.512352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.512572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.512828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.512878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.513089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.513242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.513272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.513518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.513743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.513795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.514020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.514197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.514225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.514395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.514583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.514634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.514837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.515047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.515074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.515295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.515517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.515566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.515761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.515940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.515968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.516159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.516322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.516349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 00:32:57.721 [2024-07-11 23:46:18.516558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.516720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.516773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.721 qpair failed and we were unable to recover it. 
00:32:57.721 [2024-07-11 23:46:18.516990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.517203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.721 [2024-07-11 23:46:18.517237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.517421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.517597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.517644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.517823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.518006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.518034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.518194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.518354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.518387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.518579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.518834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.518884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.519094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.519275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.519323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.519547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.519761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.519812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 
00:32:57.722 [2024-07-11 23:46:18.520003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.520198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.520247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.520442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.520635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.520683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.520911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.521115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.521149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.521325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.521501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.521550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.521728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.521926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.521973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.522158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.522344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.522392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.522582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.522794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.522844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 
00:32:57.722 [2024-07-11 23:46:18.523004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.523168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.523197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.523381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.523545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.523593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.523812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.524013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.524040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.524194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.524414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.524471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.524695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.524934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.524983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.525146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.525333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.525361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.525540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.525785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.525835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 
00:32:57.722 [2024-07-11 23:46:18.526018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.526182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.526212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.526395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.526615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.526665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.526874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.527081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.527109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.527289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.527497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.527549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.527754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.528001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.528049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.528252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.528488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.528543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 00:32:57.722 [2024-07-11 23:46:18.528724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.528890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.722 [2024-07-11 23:46:18.528939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.722 qpair failed and we were unable to recover it. 
00:32:57.724 [2024-07-11 23:46:18.592442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.724 [2024-07-11 23:46:18.592708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.724 [2024-07-11 23:46:18.592755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.724 qpair failed and we were unable to recover it. 00:32:57.724 [2024-07-11 23:46:18.592947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.724 [2024-07-11 23:46:18.593173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.724 [2024-07-11 23:46:18.593201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.724 qpair failed and we were unable to recover it. 00:32:57.724 [2024-07-11 23:46:18.593384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.724 [2024-07-11 23:46:18.593604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.724 [2024-07-11 23:46:18.593654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.593806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.593963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.593990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.594203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.594386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.594431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.594627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.594809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.594871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.595052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.595259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.595304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 
00:32:57.725 [2024-07-11 23:46:18.595487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.595652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.595701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.595914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.596147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.596175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.596363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.596608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.596657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.596883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.597092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.597120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.597309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.597502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.597549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.597717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.597942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.597989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.598201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.598459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.598509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 
00:32:57.725 [2024-07-11 23:46:18.598714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.598924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.598973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.599183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.599376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.599423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.599634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.599848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.599898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.600072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.600280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.600308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.600525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.600721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.600770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.600979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.601159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.601187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.601390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.601622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.601671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 
00:32:57.725 [2024-07-11 23:46:18.601885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.602075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.602102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.602293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.602522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.602570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.602787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.603017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.603070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.603278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.603467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.603522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.603741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.603941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.603989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.604207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.604428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.604473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.604633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.604861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.604912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 
00:32:57.725 [2024-07-11 23:46:18.605119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.605322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.605366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.605585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.605820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.605870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.606080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.606285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.606314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.606499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.606689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.606737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.606968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.607187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.607239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.607457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.607700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.607749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.607945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.608124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.608158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 
00:32:57.725 [2024-07-11 23:46:18.608314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.608548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.608599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.608791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.608995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.609022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.609189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.609441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.609491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.609713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.609954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.610005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.610189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.610344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.610390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.610587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.610836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.610883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.611097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.611295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.611341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 
00:32:57.725 [2024-07-11 23:46:18.611562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.611753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.611803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.611982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.612160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.612189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.612382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.612594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.612643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.612852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.613024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.613051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.613231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.613475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.613525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.613733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.613924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.613974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.614184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.614430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.614490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 
00:32:57.725 [2024-07-11 23:46:18.614686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.614925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.614975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.615202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.615364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.615410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.615624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.615820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.615868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.616086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.616245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.616273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.616477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.616685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.616734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.616976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.617199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.725 [2024-07-11 23:46:18.617261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.725 qpair failed and we were unable to recover it. 00:32:57.725 [2024-07-11 23:46:18.617430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.617596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.617646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.617816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.618023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.618051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.618262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.618507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.618554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.618751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.618944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.618971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.619186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.619391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.619436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.619667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.619913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.619960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.620152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.620309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.620337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.620532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.620774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.620824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.621036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.621228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.621256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.621462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.621713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.621761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.621944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.622158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.622189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.622384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.622578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.622626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.622832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.622982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.623009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.623168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.623362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.623408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.623596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.623810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.623862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.624015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.624252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.624287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.624515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.624709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.624759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.624944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.625123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.625159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.625350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.625588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.625636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.625811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.625983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.626015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.626196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.626428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.626489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.626710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.626958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.627008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.627234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.627489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.627539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.627772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.627974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.628001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.628182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.628366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.628412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.628615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.628828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.628877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.629084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.629293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.629348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.629571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.629789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.629839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.630027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.630219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.630271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.630495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.630748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.630797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.630987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.631137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.631245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.631435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.631653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.631702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.631881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.632063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.632091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.632290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.632460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.632547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.632774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.632967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.632995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.633210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.633464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.633520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.633696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.633885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.633936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.634086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.634281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.634329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.634515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.634740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.634791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.635001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.635229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.635278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.635459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.635686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.635732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.635927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.636099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.636127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.636296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.636539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.636588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.636789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.636961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.636988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.637172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.637379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.637425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.637597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.637784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.637834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.638035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.638207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.638257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.638457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.638685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.638736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.638918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.639097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.639124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 00:32:57.726 [2024-07-11 23:46:18.639329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.639564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.726 [2024-07-11 23:46:18.639615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:57.726 qpair failed and we were unable to recover it. 
00:32:57.726 [2024-07-11 23:46:18.639811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:57.726 [2024-07-11 23:46:18.639977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:57.726 [2024-07-11 23:46:18.640004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:57.726 qpair failed and we were unable to recover it.
[... the same sequence -- two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock error for tqpair=0x1b5ff50 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." -- repeats continuously from 23:46:18.640196 through 23:46:18.673740 ...]
00:32:58.002 [2024-07-11 23:46:18.673965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[... further connect() failed (errno = 111) / sock connection error entries for tqpair=0x1b5ff50 elided ...]
00:32:58.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 391192 Killed                  "${NVMF_APP[@]}" "$@"
00:32:58.003 [2024-07-11 23:46:18.676802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.003 [2024-07-11 23:46:18.677031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.003 [2024-07-11 23:46:18.677058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.003 qpair failed and we were unable to recover it.
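For context on the flood of failures above: on Linux, errno = 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 once the target application (pid 391192) had been killed by target_disconnect.sh, so every reconnect attempt from the host side was refused. A minimal sketch of how a plain blocking connect() surfaces this errno (ordinary POSIX sockets for illustration only, not SPDK's actual sock layer):

/* Minimal sketch (plain POSIX sockets, not SPDK's sock layer): a blocking
 * connect() to an address with no listener fails with errno == ECONNREFUSED
 * (111), which is what posix_sock_create keeps reporting above. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the target down this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}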
00:32:58.003 23:46:18 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:32:58.003 23:46:18 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:58.003 23:46:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:58.003 23:46:18 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:58.003 23:46:18 -- common/autotest_common.sh@10 -- # set +x
00:32:58.003 23:46:18 -- nvmf/common.sh@469 -- # nvmfpid=391888
00:32:58.003 23:46:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:58.003 23:46:18 -- nvmf/common.sh@470 -- # waitforlisten 391888
00:32:58.003 23:46:18 -- common/autotest_common.sh@819 -- # '[' -z 391888 ']'
00:32:58.003 23:46:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:58.003 23:46:18 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:58.003 23:46:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:58.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:58.003 23:46:18 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:58.003 23:46:18 -- common/autotest_common.sh@10 -- # set +x
[... posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock entries for tqpair=0x1b5ff50 remain interleaved with the shell trace above and continue below ...]
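The trace above shows the test restarting the target: disconnect_init calls nvmfappstart, which launches a fresh nvmf_tgt (pid 391888) inside the cvl_0_0_ns_spdk network namespace and then calls waitforlisten with rpc_addr=/var/tmp/spdk.sock and max_retries=100. A rough C analogue of that wait-until-listening step (an assumption about the helper's behavior, sketched for illustration, not the script itself): poll the app's UNIX-domain RPC socket until connect() succeeds or the retry budget runs out.

/* Hypothetical C analogue of the waitforlisten helper traced above: keep
 * probing the app's UNIX-domain RPC socket until a connect() succeeds or the
 * retry budget is exhausted. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;          /* target is up and listening */
        }

        close(fd);
        usleep(100 * 1000);    /* back off briefly before the next attempt */
    }
    errno = ETIMEDOUT;
    return -1;
}

int main(void)
{
    /* The trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        perror("wait_for_listen");
        return 1;
    }
    puts("process is up and listening");
    return 0;
}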
00:32:58.003 [2024-07-11 23:46:18.683519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating for tqpair=0x1b5ff50 (addr=10.0.0.2, port=4420) while the new target starts up ...]
00:32:58.005 [2024-07-11 23:46:18.708185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.005 [2024-07-11 23:46:18.708214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.005 qpair failed and we were unable to recover it.
00:32:58.005 [2024-07-11 23:46:18.708380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.708581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.708631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.708795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.708990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.709018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.709214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.709423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.709468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.709678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.709859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.709907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.710116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.710326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.710377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.710566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.710777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.710825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.711026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.711222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.711274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 
00:32:58.005 [2024-07-11 23:46:18.711481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.711711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.711763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.711982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.712207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.712257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.712449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.712690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.712742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.712921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.713068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.713095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.713296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.713524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.713577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.713805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.713986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.714013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 00:32:58.005 [2024-07-11 23:46:18.714193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.714413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.005 [2024-07-11 23:46:18.714473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.005 qpair failed and we were unable to recover it. 
00:32:58.005 [2024-07-11 23:46:18.714672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.714888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.714936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.715091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.715256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.715291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.715526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.715734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.715783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.715937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.716206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.716258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.716482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.716727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.716778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.716980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.717132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.717166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.717357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.717550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.717601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 
00:32:58.006 [2024-07-11 23:46:18.717813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.718008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.718036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.718247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.718477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.718525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.718705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.718914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.718965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.719185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.719372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.719416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.719610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.719822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.719869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.720046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.720226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.720255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.720445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.720668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.720712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 
00:32:58.006 [2024-07-11 23:46:18.720910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.721094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.721121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.721334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.721563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.721616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.721812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.722021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.722048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.722229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.722445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.722498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.722705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.722893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.722941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.723114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.723322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.723368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.723570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.723753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.723803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 
00:32:58.006 [2024-07-11 23:46:18.723989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.724188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.724240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.724422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.724616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.724665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.724838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.725061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.725089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.725257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.725472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.725525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.725716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.725933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.725986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.726183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.726395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.726445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.006 [2024-07-11 23:46:18.726638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.726854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.726907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 
00:32:58.006 [2024-07-11 23:46:18.727088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.727307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.006 [2024-07-11 23:46:18.727354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.006 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.727557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.727758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.727805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.727997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.728229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.728265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.728466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.728689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.728739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.728887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.729071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.729098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.729292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.729525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.729575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.729772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.729951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.729978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 
00:32:58.007 [2024-07-11 23:46:18.730161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.730353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.730405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.730576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.730829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.730880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.731036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.731198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.731232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.731449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.731678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.731729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.731894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.732063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.732090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.732265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.732470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.732528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.732708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.732898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.732947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.733134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.733272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:58.007 [2024-07-11 23:46:18.733353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.733356] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.007 [2024-07-11 23:46:18.733405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.733627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.733838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.733888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.734066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.734280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.734309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.734521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.734717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.734765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.734983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.735193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.735227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.735397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.735597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.735656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.735882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.736087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.736114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 
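[Editor's note] The "[ DPDK EAL parameters: ... ]" entry above (untangled here from an error line that was interleaved with it in the raw console) is the argument vector SPDK hands to DPDK's Environment Abstraction Layer at startup: -c 0xF0 is a hexadecimal core mask selecting cores 4-7, --base-virtaddr fixes the base address for hugepage mappings, --file-prefix=spdk0 keeps this process's hugepage files separate from other DPDK processes, and --proc-type=auto lets EAL choose between primary and secondary process roles. As a rough sketch of what that line corresponds to, the program below feeds the same vector to rte_eal_init(), DPDK's public init entry point; SPDK's own env_dpdk layer does considerably more than this, so treat it as an illustration, not SPDK's actual startup code.

    #include <rte_eal.h>
    #include <stdio.h>

    int main(void)
    {
        /* The same EAL argument vector that appears in the log line above.
         * rte_eal_init() consumes it exactly like main()'s argc/argv. */
        char *eal_argv[] = {
            "nvmf",                       /* program-name slot */
            "-c", "0xF0",                 /* core mask: cores 4-7 */
            "--no-telemetry",
            "--log-level=lib.eal:6",
            "--log-level=lib.cryptodev:5",
            "--log-level=user1:6",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",        /* isolates hugepage files per app */
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        puts("EAL up");
        return 0;
    }

Building this requires linking against an installed DPDK (e.g. via pkg-config --cflags --libs libdpdk).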
00:32:58.007 [2024-07-11 23:46:18.736306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.736511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.736562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.736766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.736972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.737000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.737214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.737457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.737508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.737703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.737885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.737933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.738146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.738314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.738362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.738598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.738817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.738875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.739091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.739279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.739308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 
00:32:58.007 [2024-07-11 23:46:18.739540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.739790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.739842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.740041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.740220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.740250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.007 [2024-07-11 23:46:18.740456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.740676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.007 [2024-07-11 23:46:18.740727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.007 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.740922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.741096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.741124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.741325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.741534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.741583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.741790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.741995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.742023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.742233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.742405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.742453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 
00:32:58.008 [2024-07-11 23:46:18.742622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.742812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.742870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.743052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.743259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.743293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.743492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.743717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.743769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.743978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.744134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.744181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.744379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.744577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.744627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.744816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.745022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.745051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.745233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.745415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.745459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 
00:32:58.008 [2024-07-11 23:46:18.745660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.745869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.745918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.746105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.746337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.746391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.746583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.746803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.746853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.747035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.747204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.747254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.747460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.747646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.747697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.748115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.748392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.748426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.748620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.748839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.748889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 
00:32:58.008 [2024-07-11 23:46:18.749073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.749280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.749328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.749501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.749744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.749795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.749980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.750190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.750218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.750380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.750599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.750658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.750841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.751051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.751078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.751266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.751456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.751507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 00:32:58.008 [2024-07-11 23:46:18.751716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.751924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.008 [2024-07-11 23:46:18.751974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.008 qpair failed and we were unable to recover it. 
00:32:58.008 [2024-07-11 23:46:18.752148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.008 [2024-07-11 23:46:18.752339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.008 [2024-07-11 23:46:18.752367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.008 qpair failed and we were unable to recover it.
[... this four-message sequence -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats once per retry, with only the timestamps advancing, from 23:46:18.752 through 23:46:18.780 ...]
00:32:58.011 EAL: No free 2048 kB hugepages reported on node 1
[... the connect()/qpair-failure sequence resumes and repeats unchanged, timestamps only, from 23:46:18.780 through 23:46:18.822 in this excerpt ...]
00:32:58.013 [2024-07-11 23:46:18.822715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.822935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.822962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.823145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.823351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.823379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.823581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.823803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.823852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.824006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.824232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.824282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.824521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.824735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.824786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.824966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.825174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.825203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.825378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.825595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.825643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 
00:32:58.014 [2024-07-11 23:46:18.825846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.826053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.826081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.826273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.826509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.826558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.826768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.826973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.827023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.827233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.827401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.827448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.827630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.827785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.827812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.828028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.828236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.828264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.828448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.828663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.828690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 
00:32:58.014 [2024-07-11 23:46:18.828885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.829109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.829136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.829301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.829516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.829563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.829789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.829993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.830021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.830229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.830476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.830525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.830755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.830965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.830992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.831176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.831384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.831431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.831627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.831841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.831891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 
00:32:58.014 [2024-07-11 23:46:18.832076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.832293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.832337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.832541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.832758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.832808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.833013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.833206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.833263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.833493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.833715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.833762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.833976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.834162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.834191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.014 qpair failed and we were unable to recover it. 00:32:58.014 [2024-07-11 23:46:18.834431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.014 [2024-07-11 23:46:18.834660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.834709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.834866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.835045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.835072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 
00:32:58.015 [2024-07-11 23:46:18.835230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.835462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.835511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.835720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.835965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.836015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.836236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.836473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.836524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.836707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.836898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.836946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.837153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.837343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.837388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.837554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.837771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.837820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.838006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.838213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.838262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 
00:32:58.015 [2024-07-11 23:46:18.838484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.838679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.838727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.838955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.839152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.839181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.839388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.839580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.839628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.839847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.840070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.840097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.840317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.840553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.840602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.840794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.841023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.841050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.841262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.841503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.841553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 
00:32:58.015 [2024-07-11 23:46:18.841787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.841983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.842010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.842170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.842388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.842433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.842656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.842903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.842952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.843167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.843339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.843367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.843557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.843742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.843790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.843997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.844188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.844236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.844416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.844624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.844672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 
00:32:58.015 [2024-07-11 23:46:18.844888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.845085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.845113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.845345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.845587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.845637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.845842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.846056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.846083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.846293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.846524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.846576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.846799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.847029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.847057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.847220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.847444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.847501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.847710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.847907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.847954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 
00:32:58.015 [2024-07-11 23:46:18.848165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.848356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.848400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.015 qpair failed and we were unable to recover it. 00:32:58.015 [2024-07-11 23:46:18.848617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.015 [2024-07-11 23:46:18.848833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.848883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.849074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.849255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.849283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.849503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.849700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.849749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.849938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.850117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.850156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.850366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.850557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.850612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.850818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.851021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.851048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 
00:32:58.016 [2024-07-11 23:46:18.851261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.851464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.851517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.851712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.851911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.851959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.852151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.852308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.852336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.852566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.852781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.852831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.853016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.853194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.853223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.853441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.853671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.853718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 00:32:58.016 [2024-07-11 23:46:18.853908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.854134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.016 [2024-07-11 23:46:18.854127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:58.016 [2024-07-11 23:46:18.854179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.016 qpair failed and we were unable to recover it. 
00:32:58.016 [... retries continue with the same connect() failed, errno = 111 / sock connection error / qpair failed sequence from 23:46:18.854 through 23:46:18.878 ...]
00:32:58.018 [2024-07-11 23:46:18.878995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.018 [2024-07-11 23:46:18.879171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.018 [2024-07-11 23:46:18.879226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.018 qpair failed and we were unable to recover it.
00:32:58.018 [2024-07-11 23:46:18.879413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.879655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.879707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.879919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.880101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.880128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.880315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.880498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.880545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.880756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.881004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.881051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.881276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.881533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.881583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.881768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.881970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.882017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.882254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.882506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.882560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 
00:32:58.018 [2024-07-11 23:46:18.882780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.882977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.883005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.883199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.883420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.883463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.883648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.883801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.883854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.884070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.884262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.884296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.884507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.884682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.884733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.884943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.885121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.885156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.885343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.885542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.885590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 
00:32:58.018 [2024-07-11 23:46:18.885751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.886001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.886048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.886237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.886488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.886538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.886769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.887000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.887027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.887228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.887461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.887511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.887739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.887957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.888004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.018 qpair failed and we were unable to recover it. 00:32:58.018 [2024-07-11 23:46:18.888224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.888437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.018 [2024-07-11 23:46:18.888486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.888688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.888931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.888980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 
00:32:58.019 [2024-07-11 23:46:18.889194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.889362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.889409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.889608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.889824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.889872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.890082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.890261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.890289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.890486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.890736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.890784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.890992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.891204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.891234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.891479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.891709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.891756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.891955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.892112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.892152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 
00:32:58.019 [2024-07-11 23:46:18.892377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.892623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.892671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.892890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.893072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.893099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.893326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.893561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.893610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.893802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.894002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.894028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.894215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.894383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.894430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.894641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.894839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.894887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.895078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.895254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.895301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 
00:32:58.019 [2024-07-11 23:46:18.895525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.895744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.895794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.895999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.896239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.896287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.896502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.896660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.896708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.896928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.897145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.897172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.897356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.897577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.897626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.897848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.898068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.898095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.898312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.898517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.898568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 
00:32:58.019 [2024-07-11 23:46:18.898778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.898957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.898984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.899199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.899418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.899476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.899670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.899851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.899900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.900078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.900260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.900306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.900529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.900779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.900833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.901042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.901267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.901313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 00:32:58.019 [2024-07-11 23:46:18.901508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.901750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.901799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.019 qpair failed and we were unable to recover it. 
00:32:58.019 [2024-07-11 23:46:18.902010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.019 [2024-07-11 23:46:18.902206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.902255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.902484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.902734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.902783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.902990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.903214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.903249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.903478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.903699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.903746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.903956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.904175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.904203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.904368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.904566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.904613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.904811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.905010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.905037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 
00:32:58.020 [2024-07-11 23:46:18.905250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.905477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.905531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.905716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.905932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.905986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.906222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.906463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.906524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.906745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.906940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.906967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.907157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.907341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.907369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.907579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.907777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.907825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.908008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.908157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.908184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 
00:32:58.020 [2024-07-11 23:46:18.908369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.908615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.908664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.908890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.909091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.909119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.909336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.909500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.909549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.909773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.909986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.910033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.910254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.910471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.910519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.910745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.910992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.911041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.911254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.911456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.911505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 
00:32:58.020 [2024-07-11 23:46:18.911710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.911927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.911975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.912188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.912357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.912402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.912631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.912884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.912933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.913118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.913340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.913367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.913563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.913774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.913823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.914011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.914226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.914253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.914459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.914715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.914760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 
00:32:58.020 [2024-07-11 23:46:18.914972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.915163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.915194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.915398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.915618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.915668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.020 [2024-07-11 23:46:18.915880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.916080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.020 [2024-07-11 23:46:18.916107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.020 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.916277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.916517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.916567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.916753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.916955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.917002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.917204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.917464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.917519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.917734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.917979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.918025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 
00:32:58.021 [2024-07-11 23:46:18.918212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.918447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.918507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.918719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.918922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.918972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.919167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.919387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.919434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.919656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.919900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.919948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.920153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.920359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.920386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.920543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.920760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.920809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.921013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.921218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.921246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 
00:32:58.021 [2024-07-11 23:46:18.921467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.921665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.921713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.921930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.922130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.922165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.922348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.922566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.922618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.922853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.923063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.923091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.923315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.923544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.923595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.923815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.923987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.924015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 00:32:58.021 [2024-07-11 23:46:18.924197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.924381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.021 [2024-07-11 23:46:18.924427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.021 qpair failed and we were unable to recover it. 
00:32:58.021 [2024-07-11 23:46:18.924655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.021 [2024-07-11 23:46:18.924887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.021 [2024-07-11 23:46:18.924937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.021 qpair failed and we were unable to recover it.
[The four records above repeat back-to-back with only the bracketed timestamps advancing, from 2024-07-11 23:46:18.924655 through 23:46:18.987756: each attempt logs two connect() failures with errno = 111, then the TCP qpair 0x1b5ff50 to 10.0.0.2:4420 fails to connect, and recovery fails every time.]
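For reference: errno 111 on Linux is ECONNREFUSED, meaning each TCP connection attempt to 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) reached the host but found no listener, so the initiator keeps retrying. The sketch below shows the bare POSIX-socket shape of that retry loop; it is an illustration under those assumptions, not SPDK's actual posix.c code.

    /* Minimal sketch of the retry pattern behind the repeated
     * "connect() failed, errno = 111" records above. Plain POSIX
     * sockets; NOT SPDK's posix.c. Address and port match the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);          /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        for (int attempt = 1; attempt <= 5; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("attempt %d: connected\n", attempt);
                close(fd);
                return 0;
            }
            /* ECONNREFUSED (111 on Linux): the peer host answered the
             * SYN with RST, i.e. nothing is listening on that port yet. */
            printf("attempt %d: connect() failed, errno = %d (%s)\n",
                   attempt, errno, strerror(errno));
            close(fd);
            usleep(100 * 1000);               /* brief back-off, then retry */
        }
        return 1;
    }

In the log, each such failed attempt produces the two connect() errors, the nvme_tcp qpair error, and the "qpair failed" line shown above.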
[... the same pattern repeats 5 more times between 23:46:18.988 and 23:46:18.990 ...] 00:32:58.296 [2024-07-11 23:46:18.990091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.296 [2024-07-11 23:46:18.990279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.296 [2024-07-11 23:46:18.990308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.296 [2024-07-11 23:46:18.990295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:58.296 qpair failed and we were unable to recover it. 00:32:58.296 [2024-07-11 23:46:18.990494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.296 [2024-07-11 23:46:18.990492] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.296 [2024-07-11 23:46:18.990548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.296 [2024-07-11 23:46:18.990596] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
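Interleaved with the retry spam, two unrelated events surface: a tracepoint registration error (the description name RDMA_REQ_RDY_TO_COMPL_PEND exceeds the length limit in trace_flags.c) and the target application coming up with the full tracepoint group mask (0xFFFF). The capture options are quoted from the NOTICE lines themselves:

# Both commands come straight from the app_setup_trace notices above:
spdk_trace -s nvmf -i 0        # capture a snapshot of events at runtime
cp /dev/shm/nvmf_trace.0 .     # or copy the shm file for offline analysis/debug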
00:32:58.296 [2024-07-11 23:46:18.990695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.296 [2024-07-11 23:46:18.990746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.296 qpair failed and we were unable to recover it. 00:32:58.296 [2024-07-11 23:46:18.990849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:58.296 [2024-07-11 23:46:18.990906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:58.296 [2024-07-11 23:46:18.990967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:58.296 [2024-07-11 23:46:18.990977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:58.296 [2024-07-11 23:46:18.990932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.296 [2024-07-11 23:46:18.991119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.296 [2024-07-11 23:46:18.991157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.296 qpair failed and we were unable to recover it. [... the same connect()/qpair failure pattern repeats 5 more times between 23:46:18.991 and 23:46:18.993 ...]
[... the same three-message pattern (posix.c:1032 connect() failed, errno = 111; nvme_tcp.c:2289 sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 125 more times between 23:46:18.993 and 23:46:19.052; every attempt fails the same way ...] 00:32:58.301 [2024-07-11 23:46:19.052203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.052462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.052518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it.
00:32:58.301 [2024-07-11 23:46:19.052727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.052957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.053009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.053259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.053504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.053552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.053794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.054061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.054088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.054304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.054505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.054556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.054787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.054972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.055004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.055266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.055544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.055596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.055921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.056243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.056271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 
00:32:58.301 [2024-07-11 23:46:19.056512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.056865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.056916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.057157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.057405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.057432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.057746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.058200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.058232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.058440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.058723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.058783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.059090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.059378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.059406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.059763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.059996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.060047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.060274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.060536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.060584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 
00:32:58.301 [2024-07-11 23:46:19.060886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.061174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.061207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.061430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.061690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.061739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.061983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.062217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.062245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.062493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.062792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.062844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.063087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.063346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.063374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.063617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.064006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.064054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.064343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.064644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.064694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 
00:32:58.301 [2024-07-11 23:46:19.065004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.065254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.065282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.301 [2024-07-11 23:46:19.065521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.065790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.301 [2024-07-11 23:46:19.065838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.301 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.066067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.066306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.066335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.066611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.066819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.066869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.067137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.067397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.067424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.067762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.068167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.068224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.068453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.068751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.068800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 
00:32:58.302 [2024-07-11 23:46:19.069028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.069289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.069317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.069733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.070023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.070075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.070275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.070557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.070617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.071024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.071269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.071297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.071555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.071798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.071847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.072121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.072373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.072401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.072636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.072868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.072920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 
00:32:58.302 [2024-07-11 23:46:19.073153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.073377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.073407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.073801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.074078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.074128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.074398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.074759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.074812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.075217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.075419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.075447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.075709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.075974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.076023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.076234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.076469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.076497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.076736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.076995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.077046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 
00:32:58.302 [2024-07-11 23:46:19.077331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.077563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.077611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.077849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.078084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.078111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.078379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.078616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.078665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.078903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.079137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.079173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.079399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.079647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.079696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.079973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.080214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.080243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.080446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.080654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.080705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 
00:32:58.302 [2024-07-11 23:46:19.080967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.081280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.081308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.081517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.081798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.081853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.082101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.082358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.082388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.082564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.082741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.302 [2024-07-11 23:46:19.082792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.302 qpair failed and we were unable to recover it. 00:32:58.302 [2024-07-11 23:46:19.083061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.083338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.083367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.083613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.083921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.083969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.084215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.084481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.084539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 
00:32:58.303 [2024-07-11 23:46:19.084819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.085212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.085240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.085443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.085667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.085718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.085975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.086224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.086254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.086510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.086841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.086890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.087161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.087389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.087416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.087621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.087900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.087951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.088214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.088433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.088460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 
00:32:58.303 [2024-07-11 23:46:19.088673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.088960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.089010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.089257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.089467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.089520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.089741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.090034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.090087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.090294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.090499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.090548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.090814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.091131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.091215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.091466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.091725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.091773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.092075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.092359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.092388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 
00:32:58.303 [2024-07-11 23:46:19.092671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.092902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.092951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.093215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.093448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.093475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.093708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.093925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.093977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.094201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.094437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.094464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.094715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.095031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.095085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.095349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.095623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.095673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.095944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.096181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.096209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 
00:32:58.303 [2024-07-11 23:46:19.096421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.096695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.096742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.097025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.097266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.097295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.097553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.097849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.097908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.098177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.098406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.098433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.098722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.099013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.099063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.099352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.099602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.099646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 00:32:58.303 [2024-07-11 23:46:19.099937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.100158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.303 [2024-07-11 23:46:19.100186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.303 qpair failed and we were unable to recover it. 
00:32:58.303 [2024-07-11 23:46:19.100435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.100662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.100711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.100980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.101238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.101266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.101528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.101788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.101837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.102183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.102409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.102436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.102635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.102900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.102963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.103258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.103532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.103592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.103844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.104090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.104118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 
00:32:58.304 [2024-07-11 23:46:19.104402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.104651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.104699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.104900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.105075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.105102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.105362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.105629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.105688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.105977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.106276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.106306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.106607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.106871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.106920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.107173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.107411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.107438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 00:32:58.304 [2024-07-11 23:46:19.107701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.107981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.304 [2024-07-11 23:46:19.108031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.304 qpair failed and we were unable to recover it. 
00:32:58.304 [2024-07-11 23:46:19.108281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.304 [2024-07-11 23:46:19.108501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.304 [2024-07-11 23:46:19.108553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.304 qpair failed and we were unable to recover it.
[The same four-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously, roughly 150 times, with only the timestamps advancing from [2024-07-11 23:46:19.108281] through [2024-07-11 23:46:19.193418]; the intervening repetitions are elided here.]
00:32:58.309 [2024-07-11 23:46:19.193638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.193841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.193892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.309 [2024-07-11 23:46:19.194100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.194338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.194386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.309 [2024-07-11 23:46:19.194550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.194766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.194818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.309 [2024-07-11 23:46:19.194996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.195209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.195237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.309 [2024-07-11 23:46:19.195421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.195627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.195678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.309 [2024-07-11 23:46:19.195904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.196147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.196175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.309 [2024-07-11 23:46:19.196379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.196647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.196697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 
00:32:58.309 [2024-07-11 23:46:19.196898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.197150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.197178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.309 [2024-07-11 23:46:19.197390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.197587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.309 [2024-07-11 23:46:19.197636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.309 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.197855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.198076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.198103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.198294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.198505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.198556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.198793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.199037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.199089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.199274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.199465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.199515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.199722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.199967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.200017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 
00:32:58.310 [2024-07-11 23:46:19.200213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.200436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.200492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.200696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.200890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.200940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.201097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.201294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.201339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.201526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.201768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.201818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.202003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.202161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.202191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.202440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.202680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.202731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.202920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.203077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.203105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 
00:32:58.310 [2024-07-11 23:46:19.203304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.203520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.203574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.203769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.203993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.204019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.204225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.204435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.204493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.204689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.204921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.204969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.205159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.205358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.205386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.205558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.205765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.205831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.206019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.206202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.206236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 
00:32:58.310 [2024-07-11 23:46:19.206455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.206667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.206716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.206963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.207181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.207232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.207466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.207666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.207710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.207939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.208176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.208204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.208402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.208595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.208644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.208865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.209129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.209164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.209377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.209645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.209706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 
00:32:58.310 [2024-07-11 23:46:19.209908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.210157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.210189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.210404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.210585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.210635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.210839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.211046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.211073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.211303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.211573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.211622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.310 [2024-07-11 23:46:19.211853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.212071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.310 [2024-07-11 23:46:19.212098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.310 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.212293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.212501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.212555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.212749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.212971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.213021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 
00:32:58.311 [2024-07-11 23:46:19.213228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.213492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.213544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.213738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.213940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.213967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.214180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.214384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.214430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.214664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.214898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.214949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.215131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.215354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.215382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.215605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.215852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.215903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.216151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.216338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.216365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 
00:32:58.311 [2024-07-11 23:46:19.216622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.216812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.216862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.217045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.217228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.217257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.217455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.217703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.217752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.217967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.218158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.218195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.218444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.218670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.218722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.218905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.219079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.219106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.219341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.219541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.219597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 
00:32:58.311 [2024-07-11 23:46:19.219782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.219946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.219998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.220231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.220460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.220516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.220787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.220958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.220985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.221243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.221521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.221548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.221758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.221941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.221967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.222158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.222348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.222395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.222608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.222806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.222857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 
00:32:58.311 [2024-07-11 23:46:19.223048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.223270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.223318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.223503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.223664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.223715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.223954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.224196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.224224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.224441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.224667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.224717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.224939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.225147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.225175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.225364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.225600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.225655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 00:32:58.311 [2024-07-11 23:46:19.225885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.226095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.311 [2024-07-11 23:46:19.226123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.311 qpair failed and we were unable to recover it. 
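On Linux, errno = 111 is ECONNREFUSED: the host's TCP SYN reached 10.0.0.2 but nothing was accepting on port 4420, so the kernel failed the connect(2) that SPDK's posix sock layer issued. A minimal sketch that reproduces the same errno outside SPDK, assuming (as the refusals above imply) no listener is bound on the target port; the address and port simply mirror the log:

    # Attempt a raw TCP connect via bash's /dev/tcp pseudo-device; with no listener
    # on 10.0.0.2:4420 this fails with "Connection refused", the same errno 111 as above.
    # timeout(1) bounds the attempt in case the address silently drops SYNs instead.
    timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
        && echo "connected (a listener is up)" \
        || echo "connect() failed, as in the log"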
[... the connect() failed, errno = 111 / qpair-failure triplet keeps repeating through 23:46:19.228 ...]
00:32:58.312 23:46:19 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:58.312 23:46:19 -- common/autotest_common.sh@852 -- # return 0
00:32:58.312 23:46:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:32:58.312 23:46:19 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:58.312 23:46:19 -- common/autotest_common.sh@10 -- # set +x
[... further connect()/qpair-failure entries are interleaved with the trace lines above ...]
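The `-- #` lines are bash xtrace from the test harness: `(( i == 0 ))` followed by `return 0` is the tail of a bounded polling loop deciding that the just-launched nvmf target is up, and `timing_exit start_nvmf_tgt` closes that timed phase. A simplified sketch of the polling pattern; the loop bound, sleep interval, and the rpc_get_methods probe are illustrative, not the exact autotest_common.sh code:

    # Poll the target's JSON-RPC socket until it answers, giving up after 10 tries.
    wait_for_tgt_rpc() {
        local i
        for ((i = 10; i > 0; i--)); do
            ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1  # loop exhausted without an answer
        return 0                  # the 'return 0' seen in the trace above
    }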
00:32:58.312 [2024-07-11 23:46:19.229764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.312 [2024-07-11 23:46:19.230061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.312 [2024-07-11 23:46:19.230089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.312 qpair failed and we were unable to recover it.
[... the same triplet repeats without interruption from 23:46:19.229 through 23:46:19.249 (Jenkins wall clock 00:32:58.312 to 00:32:58.576), still against tqpair=0x1b5ff50 at 10.0.0.2, port=4420 ...]
00:32:58.576 [2024-07-11 23:46:19.249794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.250039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.250067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.250277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.250524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.250575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.250791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.251010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.251037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 23:46:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.576 [2024-07-11 23:46:19.251242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 23:46:19 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:58.576 [2024-07-11 23:46:19.251444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 23:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.576 [2024-07-11 23:46:19.251502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 23:46:19 -- common/autotest_common.sh@10 -- # set +x 00:32:58.576 [2024-07-11 23:46:19.251835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.252104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.252131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.252317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.252490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.252554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.252784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.253011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.253038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 
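(Interleaved with the connect noise, the test script arms its cleanup trap and issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0, creating the RAM-backed block device the target will export: 64 MB total, 512-byte blocks, named Malloc0. In this harness rpc_cmd wraps SPDK's scripts/rpc.py, so a standalone equivalent would look roughly like the sketch below, assuming the default RPC socket:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # positional args: total size in MB, block size in bytes; -b names the bdev
)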
00:32:58.576 [2024-07-11 23:46:19.253276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.253464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.253523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.253776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.254032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.254059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.254303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.254497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.254548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.254907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.255159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.255193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.255373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.255592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.255643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.255894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.256224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.256252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.256462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.256743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.256798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 
00:32:58.576 [2024-07-11 23:46:19.257045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.257275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.257304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.257545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.257841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.257892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.258087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.258294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.258321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.258557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.258804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.258853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.259084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.259301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.259329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.259547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.259821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.576 [2024-07-11 23:46:19.259880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.576 qpair failed and we were unable to recover it. 00:32:58.576 [2024-07-11 23:46:19.260190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.260397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.260425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 
00:32:58.577 [2024-07-11 23:46:19.260749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.261002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.261054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.261255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.261449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.261502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.261735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.262001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.262059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.262308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.262551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.262601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.262881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.263193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.263222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.263439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.263686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.263734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.263944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.264146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.264174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 
00:32:58.577 [2024-07-11 23:46:19.264330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.264608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.264667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.264961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.265170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.265208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.265376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.265593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.265645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.265921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.266195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.266223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.266413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.266633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.266684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.266960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.267116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.267152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.267394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.267571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.267621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 
00:32:58.577 [2024-07-11 23:46:19.267834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.268097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.268124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.268321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.268571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.268624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.268935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.269240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.269269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.269521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.269720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.269772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.270000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.270249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.270281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.270547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.270812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.270866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.271066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.271279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.271307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 
00:32:58.577 [2024-07-11 23:46:19.271540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.271805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.271855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.272153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.272381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.272408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.272634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.272918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.272969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.273257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.273447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.273474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.273720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.273973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.274033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.274286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.274474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.274526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.274806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.275102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.275129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 
00:32:58.577 [2024-07-11 23:46:19.275370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.275657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.275711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.577 qpair failed and we were unable to recover it. 00:32:58.577 [2024-07-11 23:46:19.275945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.577 [2024-07-11 23:46:19.276193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.276222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.276411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.276659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.276709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.277011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.277239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.277268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.277477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.277714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.277763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.278056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.278294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.278322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.278507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.278788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.278840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 
00:32:58.578 [2024-07-11 23:46:19.279069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.279305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.279333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.279598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.279817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.279865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.280153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.280358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.280397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.280607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.280871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.280921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.281151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.281334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.281361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.281604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.281864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.281918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.282185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.282415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.282442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 
00:32:58.578 [2024-07-11 23:46:19.282668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.282962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.282988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 Malloc0 [2024-07-11 23:46:19.283234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.283405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.283451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 23:46:19 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:58.578 23:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable 23:46:19 -- common/autotest_common.sh@10 -- # set +x 00:32:58.578 [2024-07-11 23:46:19.284250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.284512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.284540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.284746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.284950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.284977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.285176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.285423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.285451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.285692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.285895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.285923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it.
00:32:58.578 [2024-07-11 23:46:19.286134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.286355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.286382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.286660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.286737] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.578 [2024-07-11 23:46:19.286912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.286961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.287208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.287419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.287446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.287720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.287988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.288036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.288259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.288481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.288534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.288778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.289059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.289111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.289365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.289572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.289624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 
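(The nvmf_create_transport call traced just above takes effect here: tcp.c logs "*** TCP Transport Init ***", so the target process now has a TCP transport instance. A standalone sketch of the same step, with the -o flag carried over verbatim from the trace; in SPDK's rpc.py of this era -o appears to map to the TCP C2H-success optimization toggle, though that reading is an assumption:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
)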
00:32:58.578 [2024-07-11 23:46:19.289837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.290064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.290090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.290331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.290584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.290636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.290864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.291124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.578 [2024-07-11 23:46:19.291161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.578 qpair failed and we were unable to recover it. 00:32:58.578 [2024-07-11 23:46:19.291400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.291650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.291700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.291993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.292249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.292277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.292588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.292789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.292839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.293080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.293378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.293409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 
00:32:58.579 [2024-07-11 23:46:19.293642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.293893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.293943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.294189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.294446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.294473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.294694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.294977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.295031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.295226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.295485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.295535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.295768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.295999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.296026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 23:46:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.579 23:46:19 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:58.579 [2024-07-11 23:46:19.296321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 23:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.579 23:46:19 -- common/autotest_common.sh@10 -- # set +x 00:32:58.579 [2024-07-11 23:46:19.296621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.296671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.296911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.297136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.297177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 
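(Next the script creates the subsystem the initiators will target. A standalone sketch of the traced rpc_cmd, flags as in SPDK's rpc.py:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # -a: allow any host NQN to connect; -s: serial number reported to hosts
)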
00:32:58.579 [2024-07-11 23:46:19.297454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.297758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.297786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.298009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.298296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.298324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.298539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.298778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.298822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.299067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.299358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.299385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.299602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.299903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.299963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.300240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.300453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.300480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.300729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.301022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.301075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 
00:32:58.579 [2024-07-11 23:46:19.301346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.301554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.301606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.301863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.302119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.302153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.302341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.302581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.302631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.302880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.303064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.303090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.303333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.303554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.303604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 [2024-07-11 23:46:19.303815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 23:46:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.579 [2024-07-11 23:46:19.304071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.579 [2024-07-11 23:46:19.304099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.579 qpair failed and we were unable to recover it. 00:32:58.579 23:46:19 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:58.579 [2024-07-11 23:46:19.304337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 23:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.580 [2024-07-11 23:46:19.304593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 23:46:19 -- common/autotest_common.sh@10 -- # set +x 00:32:58.580 [2024-07-11 23:46:19.304643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 
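(The Malloc0 bdev created earlier is then attached to that subsystem as a namespace. Standalone sketch:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Malloc0 becomes namespace 1 of nqn.2016-06.io.spdk:cnode1
)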
00:32:58.580 [2024-07-11 23:46:19.304909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.305123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.305165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.305439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.305658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.305685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.305933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.306219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.306247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.306453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.306698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.306749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.307062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.307282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.307311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.307544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.307835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.307899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.308170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.308427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.308454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 
00:32:58.580 [2024-07-11 23:46:19.308664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.308923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.308971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.309206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.309422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.309449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.309770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.310018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.310069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.310276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.310554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.310608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.310875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.311122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.311157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.311403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.311685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.311751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 00:32:58.580 [2024-07-11 23:46:19.312061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 23:46:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.580 [2024-07-11 23:46:19.312293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.580 [2024-07-11 23:46:19.312325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420 00:32:58.580 qpair failed and we were unable to recover it. 
00:32:58.580 23:46:19 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:58.580 [2024-07-11 23:46:19.312594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 23:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:58.580 [2024-07-11 23:46:19.312914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.312971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 23:46:19 -- common/autotest_common.sh@10 -- # set +x
00:32:58.580 [2024-07-11 23:46:19.313203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.313439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.313466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 [2024-07-11 23:46:19.313720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.313951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.313978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 [2024-07-11 23:46:19.314251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.314495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.314545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 [2024-07-11 23:46:19.314816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.315127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.315162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 [2024-07-11 23:46:19.315414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.315649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.315698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5ff50 with addr=10.0.0.2, port=4420
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 [2024-07-11 23:46:19.315975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.580 [2024-07-11 23:46:19.316261] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:58.580 [2024-07-11 23:46:19.319428] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:32:58.580 [2024-07-11 23:46:19.319496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5ff50 (107): Transport endpoint is not connected
00:32:58.580 [2024-07-11 23:46:19.319582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 23:46:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:58.580 23:46:19 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:58.580 23:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:58.580 23:46:19 -- common/autotest_common.sh@10 -- # set +x
00:32:58.580 [2024-07-11 23:46:19.327601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.580 [2024-07-11 23:46:19.327789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.580 [2024-07-11 23:46:19.327825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.580 [2024-07-11 23:46:19.327843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.580 [2024-07-11 23:46:19.327855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.580 [2024-07-11 23:46:19.327887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 23:46:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:58.580 23:46:19 -- host/target_disconnect.sh@58 -- # wait 391350
00:32:58.580 [2024-07-11 23:46:19.337552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.580 [2024-07-11 23:46:19.337717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.580 [2024-07-11 23:46:19.337746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.580 [2024-07-11 23:46:19.337761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.580 [2024-07-11 23:46:19.337774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.580 [2024-07-11 23:46:19.337805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.580 qpair failed and we were unable to recover it.
00:32:58.580 [2024-07-11 23:46:19.347434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.580 [2024-07-11 23:46:19.347600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.580 [2024-07-11 23:46:19.347630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.580 [2024-07-11 23:46:19.347645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.580 [2024-07-11 23:46:19.347658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.347689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.357499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.357654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.357685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.357701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.357714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.357745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.367525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.367686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.367715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.367730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.367749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.367780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.377588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.377778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.377807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.377823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.377836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.377866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.387572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.387751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.387779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.387795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.387808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.387839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.397607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.397758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.397787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.397803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.397816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.397847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.407655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.407864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.407894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.407910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.407923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.407954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.417651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.417804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.417833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.417848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.417861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.417892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.427638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.427812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.427841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.427856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.427869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.427900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.437684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.437867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.437897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.437912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.437925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.437955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.447743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.447898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.447927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.447943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.447956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.447987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.457751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.457934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.457963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.457984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.457998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.458030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.467769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.467948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.467977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.467992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.468006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.468036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.477845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.478037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.478065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.478081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.478095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.478125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.487871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.581 [2024-07-11 23:46:19.488022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.581 [2024-07-11 23:46:19.488050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.581 [2024-07-11 23:46:19.488065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.581 [2024-07-11 23:46:19.488078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.581 [2024-07-11 23:46:19.488109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.581 qpair failed and we were unable to recover it.
00:32:58.581 [2024-07-11 23:46:19.497885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.582 [2024-07-11 23:46:19.498034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.582 [2024-07-11 23:46:19.498063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.582 [2024-07-11 23:46:19.498078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.582 [2024-07-11 23:46:19.498091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.582 [2024-07-11 23:46:19.498121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.582 qpair failed and we were unable to recover it.
00:32:58.582 [2024-07-11 23:46:19.507878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.582 [2024-07-11 23:46:19.508038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.582 [2024-07-11 23:46:19.508068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.582 [2024-07-11 23:46:19.508084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.582 [2024-07-11 23:46:19.508096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.582 [2024-07-11 23:46:19.508127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.582 qpair failed and we were unable to recover it.
00:32:58.582 [2024-07-11 23:46:19.517939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.582 [2024-07-11 23:46:19.518091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.582 [2024-07-11 23:46:19.518119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.582 [2024-07-11 23:46:19.518134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.582 [2024-07-11 23:46:19.518157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.582 [2024-07-11 23:46:19.518189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.582 qpair failed and we were unable to recover it.
00:32:58.840 [2024-07-11 23:46:19.527948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.528110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.528146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.528165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.528179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.528210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.537964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.538115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.538153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.538170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.538183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.538215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.548009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.548170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.548199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.548221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.548235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.548266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.558081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.558254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.558282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.558297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.558310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.558340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.568071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.568242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.568273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.568289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.568302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.568334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.578088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.578249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.578278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.578294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.578308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.578340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.588108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.588270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.588299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.588314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.588327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.588358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.598181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.598362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.598391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.598406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.598419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.598450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.841 [2024-07-11 23:46:19.608205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.841 [2024-07-11 23:46:19.608364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.841 [2024-07-11 23:46:19.608400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.841 [2024-07-11 23:46:19.608415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.841 [2024-07-11 23:46:19.608428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.841 [2024-07-11 23:46:19.608460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.841 qpair failed and we were unable to recover it.
00:32:58.842 [2024-07-11 23:46:19.618233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.842 [2024-07-11 23:46:19.618382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.842 [2024-07-11 23:46:19.618410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.842 [2024-07-11 23:46:19.618425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.842 [2024-07-11 23:46:19.618439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.842 [2024-07-11 23:46:19.618470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.842 qpair failed and we were unable to recover it.
00:32:58.842 [2024-07-11 23:46:19.628258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.842 [2024-07-11 23:46:19.628433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.842 [2024-07-11 23:46:19.628462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.842 [2024-07-11 23:46:19.628478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.842 [2024-07-11 23:46:19.628491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.842 [2024-07-11 23:46:19.628522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.842 qpair failed and we were unable to recover it.
00:32:58.842 [2024-07-11 23:46:19.638277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.842 [2024-07-11 23:46:19.638446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.842 [2024-07-11 23:46:19.638474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.842 [2024-07-11 23:46:19.638495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.842 [2024-07-11 23:46:19.638509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.842 [2024-07-11 23:46:19.638540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.842 qpair failed and we were unable to recover it.
00:32:58.842 [2024-07-11 23:46:19.648366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.842 [2024-07-11 23:46:19.648547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.842 [2024-07-11 23:46:19.648576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.842 [2024-07-11 23:46:19.648591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.842 [2024-07-11 23:46:19.648604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.842 [2024-07-11 23:46:19.648635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.842 qpair failed and we were unable to recover it.
00:32:58.842 [2024-07-11 23:46:19.658358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.842 [2024-07-11 23:46:19.658548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.842 [2024-07-11 23:46:19.658576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.842 [2024-07-11 23:46:19.658591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.842 [2024-07-11 23:46:19.658605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.842 [2024-07-11 23:46:19.658635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.842 qpair failed and we were unable to recover it.
00:32:58.842 [2024-07-11 23:46:19.668352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.842 [2024-07-11 23:46:19.668510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.842 [2024-07-11 23:46:19.668539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.842 [2024-07-11 23:46:19.668554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.842 [2024-07-11 23:46:19.668567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.842 [2024-07-11 23:46:19.668598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.842 qpair failed and we were unable to recover it.
00:32:58.842 [2024-07-11 23:46:19.678387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.842 [2024-07-11 23:46:19.678559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.842 [2024-07-11 23:46:19.678595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.842 [2024-07-11 23:46:19.678610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.842 [2024-07-11 23:46:19.678623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.678662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.688383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.688570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.688600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.688615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.688629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.688660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.698450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.698598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.698627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.698643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.698657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.698688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.708452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.708628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.708657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.708672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.708685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.708715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.718487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.718651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.718679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.718695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.718708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.718739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.728499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.728656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.728685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.728706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.728720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.728751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.738571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.738747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.738775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.738790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.738804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.738834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.748619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.748803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.748833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.748849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.748862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.748894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.843 [2024-07-11 23:46:19.758631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.843 [2024-07-11 23:46:19.758802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.843 [2024-07-11 23:46:19.758831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.843 [2024-07-11 23:46:19.758846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.843 [2024-07-11 23:46:19.758860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.843 [2024-07-11 23:46:19.758891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.843 qpair failed and we were unable to recover it.
00:32:58.844 [2024-07-11 23:46:19.768670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.844 [2024-07-11 23:46:19.768910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.844 [2024-07-11 23:46:19.768939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.844 [2024-07-11 23:46:19.768955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.844 [2024-07-11 23:46:19.768968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.844 [2024-07-11 23:46:19.768999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.844 qpair failed and we were unable to recover it.
00:32:58.844 [2024-07-11 23:46:19.778671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.844 [2024-07-11 23:46:19.778823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.844 [2024-07-11 23:46:19.778853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.844 [2024-07-11 23:46:19.778868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.844 [2024-07-11 23:46:19.778881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.844 [2024-07-11 23:46:19.778913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.844 qpair failed and we were unable to recover it.
00:32:58.844 [2024-07-11 23:46:19.788741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.844 [2024-07-11 23:46:19.788962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.844 [2024-07-11 23:46:19.788991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.844 [2024-07-11 23:46:19.789006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.844 [2024-07-11 23:46:19.789020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:58.844 [2024-07-11 23:46:19.789051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:58.844 qpair failed and we were unable to recover it.
00:32:59.104 [2024-07-11 23:46:19.798702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.104 [2024-07-11 23:46:19.798851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.104 [2024-07-11 23:46:19.798880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.104 [2024-07-11 23:46:19.798896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.104 [2024-07-11 23:46:19.798910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.104 [2024-07-11 23:46:19.798941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.104 qpair failed and we were unable to recover it.
00:32:59.104 [2024-07-11 23:46:19.808789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.104 [2024-07-11 23:46:19.808977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.104 [2024-07-11 23:46:19.809006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.104 [2024-07-11 23:46:19.809023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.104 [2024-07-11 23:46:19.809036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.104 [2024-07-11 23:46:19.809067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.104 qpair failed and we were unable to recover it.
00:32:59.104 [2024-07-11 23:46:19.818759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.104 [2024-07-11 23:46:19.818929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.104 [2024-07-11 23:46:19.818963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.104 [2024-07-11 23:46:19.818980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.104 [2024-07-11 23:46:19.818994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.104 [2024-07-11 23:46:19.819025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.104 qpair failed and we were unable to recover it.
00:32:59.104 [2024-07-11 23:46:19.828803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.104 [2024-07-11 23:46:19.828987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.104 [2024-07-11 23:46:19.829016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.104 [2024-07-11 23:46:19.829032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.104 [2024-07-11 23:46:19.829045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.104 [2024-07-11 23:46:19.829076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.104 qpair failed and we were unable to recover it.
00:32:59.104 [2024-07-11 23:46:19.838845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.104 [2024-07-11 23:46:19.838997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.104 [2024-07-11 23:46:19.839025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.104 [2024-07-11 23:46:19.839041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.104 [2024-07-11 23:46:19.839054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.104 [2024-07-11 23:46:19.839084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.104 qpair failed and we were unable to recover it.
00:32:59.104 [2024-07-11 23:46:19.848985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.104 [2024-07-11 23:46:19.849234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.104 [2024-07-11 23:46:19.849264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.104 [2024-07-11 23:46:19.849279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.104 [2024-07-11 23:46:19.849292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.104 [2024-07-11 23:46:19.849324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.104 qpair failed and we were unable to recover it.
00:32:59.104 [2024-07-11 23:46:19.858865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.104 [2024-07-11 23:46:19.859041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.104 [2024-07-11 23:46:19.859069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.104 [2024-07-11 23:46:19.859084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.104 [2024-07-11 23:46:19.859097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.104 [2024-07-11 23:46:19.859128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.104 qpair failed and we were unable to recover it. 00:32:59.104 [2024-07-11 23:46:19.868956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.104 [2024-07-11 23:46:19.869112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.104 [2024-07-11 23:46:19.869151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.104 [2024-07-11 23:46:19.869170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.104 [2024-07-11 23:46:19.869183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.104 [2024-07-11 23:46:19.869216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.104 qpair failed and we were unable to recover it. 00:32:59.104 [2024-07-11 23:46:19.879069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.104 [2024-07-11 23:46:19.879241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.104 [2024-07-11 23:46:19.879271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.104 [2024-07-11 23:46:19.879287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.104 [2024-07-11 23:46:19.879300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.104 [2024-07-11 23:46:19.879332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.104 qpair failed and we were unable to recover it. 
00:32:59.104 [2024-07-11 23:46:19.888997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.104 [2024-07-11 23:46:19.889179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.104 [2024-07-11 23:46:19.889209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.104 [2024-07-11 23:46:19.889224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.104 [2024-07-11 23:46:19.889238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.104 [2024-07-11 23:46:19.889269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.104 qpair failed and we were unable to recover it. 00:32:59.104 [2024-07-11 23:46:19.899014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.104 [2024-07-11 23:46:19.899183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.104 [2024-07-11 23:46:19.899213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.104 [2024-07-11 23:46:19.899228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.104 [2024-07-11 23:46:19.899242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.104 [2024-07-11 23:46:19.899273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.104 qpair failed and we were unable to recover it. 00:32:59.104 [2024-07-11 23:46:19.909024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.104 [2024-07-11 23:46:19.909185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.104 [2024-07-11 23:46:19.909221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.104 [2024-07-11 23:46:19.909238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.104 [2024-07-11 23:46:19.909252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.104 [2024-07-11 23:46:19.909283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.104 qpair failed and we were unable to recover it. 
00:32:59.104 [2024-07-11 23:46:19.919095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.104 [2024-07-11 23:46:19.919312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.104 [2024-07-11 23:46:19.919341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.919357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.919370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.919403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:19.929135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.929293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.929321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.929337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.929349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.929381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:19.939164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.939319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.939348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.939364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.939377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.939408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 
00:32:59.105 [2024-07-11 23:46:19.949154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.949308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.949337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.949353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.949365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.949402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:19.959183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.959335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.959364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.959379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.959392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.959424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:19.969215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.969365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.969393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.969408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.969421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.969452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 
00:32:59.105 [2024-07-11 23:46:19.979307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.979494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.979523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.979538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.979551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.979581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:19.989341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.989496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.989525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.989540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.989553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.989584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:19.999537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:19.999712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:19.999746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:19.999762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:19.999775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:19.999805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 
00:32:59.105 [2024-07-11 23:46:20.009461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:20.009681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:20.009711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:20.009726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:20.009740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:20.009772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:20.019441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:20.019608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:20.019638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:20.019655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:20.019668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:20.019701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:20.029472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:20.029631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:20.029672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:20.029687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:20.029702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:20.029733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 
00:32:59.105 [2024-07-11 23:46:20.039456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:20.039611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:20.039647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:20.039663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:20.039677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:20.039716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.105 [2024-07-11 23:46:20.049497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.105 [2024-07-11 23:46:20.049656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.105 [2024-07-11 23:46:20.049686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.105 [2024-07-11 23:46:20.049702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.105 [2024-07-11 23:46:20.049716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.105 [2024-07-11 23:46:20.049747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.105 qpair failed and we were unable to recover it. 00:32:59.364 [2024-07-11 23:46:20.059466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.059620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.059659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.059674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.059688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.059719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 
00:32:59.364 [2024-07-11 23:46:20.069520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.069685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.069714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.069730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.069744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.069774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 00:32:59.364 [2024-07-11 23:46:20.079521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.079676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.079704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.079719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.079733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.079763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 00:32:59.364 [2024-07-11 23:46:20.089556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.089743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.089778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.089794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.089808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.089838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 
00:32:59.364 [2024-07-11 23:46:20.099543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.099707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.099736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.099751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.099764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.099794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 00:32:59.364 [2024-07-11 23:46:20.109594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.109747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.109776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.109792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.109805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.109836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 00:32:59.364 [2024-07-11 23:46:20.119602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.119784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.119813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.119828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.119841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.119876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 
00:32:59.364 [2024-07-11 23:46:20.129642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.129794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.129823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.129839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.129852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.129888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 00:32:59.364 [2024-07-11 23:46:20.139690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.139858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.139887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.139904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.364 [2024-07-11 23:46:20.139917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.364 [2024-07-11 23:46:20.139947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.364 qpair failed and we were unable to recover it. 00:32:59.364 [2024-07-11 23:46:20.149708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.364 [2024-07-11 23:46:20.149856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.364 [2024-07-11 23:46:20.149884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.364 [2024-07-11 23:46:20.149900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.149914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.149945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 
00:32:59.365 [2024-07-11 23:46:20.159775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.159956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.159986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.160001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.160015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.160045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 00:32:59.365 [2024-07-11 23:46:20.169767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.169920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.169948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.169964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.169977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.170008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 00:32:59.365 [2024-07-11 23:46:20.179786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.179936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.179970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.179986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.179999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.180029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 
00:32:59.365 [2024-07-11 23:46:20.189843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.189997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.190026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.190042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.190055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.190085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 00:32:59.365 [2024-07-11 23:46:20.199845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.200043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.200072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.200088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.200101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.200131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 00:32:59.365 [2024-07-11 23:46:20.209893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.210052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.210081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.210096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.210109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.210150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 
00:32:59.365 [2024-07-11 23:46:20.219954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.220115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.220151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.220168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.220182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.220218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 00:32:59.365 [2024-07-11 23:46:20.229972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.230125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.230163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.230179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.230192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.230223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 00:32:59.365 [2024-07-11 23:46:20.239973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.240130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.240173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.240190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.240203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.240235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 
00:32:59.365 [2024-07-11 23:46:20.250055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.250216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.250245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.250259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.365 [2024-07-11 23:46:20.250273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.365 [2024-07-11 23:46:20.250304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.365 qpair failed and we were unable to recover it. 00:32:59.365 [2024-07-11 23:46:20.260039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.365 [2024-07-11 23:46:20.260195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.365 [2024-07-11 23:46:20.260224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.365 [2024-07-11 23:46:20.260239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.366 [2024-07-11 23:46:20.260252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.366 [2024-07-11 23:46:20.260283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.366 qpair failed and we were unable to recover it. 00:32:59.366 [2024-07-11 23:46:20.270078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.366 [2024-07-11 23:46:20.270233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.366 [2024-07-11 23:46:20.270268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.366 [2024-07-11 23:46:20.270284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.366 [2024-07-11 23:46:20.270297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.366 [2024-07-11 23:46:20.270328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.366 qpair failed and we were unable to recover it. 
00:32:59.366 [2024-07-11 23:46:20.280101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.366 [2024-07-11 23:46:20.280259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.366 [2024-07-11 23:46:20.280288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.366 [2024-07-11 23:46:20.280304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.366 [2024-07-11 23:46:20.280317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.366 [2024-07-11 23:46:20.280348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.366 qpair failed and we were unable to recover it. 00:32:59.366 [2024-07-11 23:46:20.290122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.366 [2024-07-11 23:46:20.290278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.366 [2024-07-11 23:46:20.290307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.366 [2024-07-11 23:46:20.290322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.366 [2024-07-11 23:46:20.290335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.366 [2024-07-11 23:46:20.290365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.366 qpair failed and we were unable to recover it. 00:32:59.366 [2024-07-11 23:46:20.300149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.366 [2024-07-11 23:46:20.300295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.366 [2024-07-11 23:46:20.300323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.366 [2024-07-11 23:46:20.300339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.366 [2024-07-11 23:46:20.300351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.366 [2024-07-11 23:46:20.300382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.366 qpair failed and we were unable to recover it. 
00:32:59.366 [2024-07-11 23:46:20.310210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.366 [2024-07-11 23:46:20.310383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.366 [2024-07-11 23:46:20.310412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.366 [2024-07-11 23:46:20.310427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.366 [2024-07-11 23:46:20.310446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.366 [2024-07-11 23:46:20.310479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.366 qpair failed and we were unable to recover it. 00:32:59.624 [2024-07-11 23:46:20.320208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.320371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.320400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.320415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.320428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.320459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 00:32:59.624 [2024-07-11 23:46:20.330242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.330393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.330421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.330437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.330449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.330480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 
00:32:59.624 [2024-07-11 23:46:20.340271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.340427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.340455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.340470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.340483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.340514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 00:32:59.624 [2024-07-11 23:46:20.350327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.350484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.350513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.350528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.350541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.350572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 00:32:59.624 [2024-07-11 23:46:20.360328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.360482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.360510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.360526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.360539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.360570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 
00:32:59.624 [2024-07-11 23:46:20.370346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.370491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.370520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.370536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.370549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.370579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 00:32:59.624 [2024-07-11 23:46:20.380397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.380576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.380604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.380619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.380632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.380663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 00:32:59.624 [2024-07-11 23:46:20.390436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.390591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.390619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.390635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.390648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.390677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 
00:32:59.624 [2024-07-11 23:46:20.400433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.400583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.400612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.400627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.624 [2024-07-11 23:46:20.400647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.624 [2024-07-11 23:46:20.400678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.624 qpair failed and we were unable to recover it. 00:32:59.624 [2024-07-11 23:46:20.410494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.624 [2024-07-11 23:46:20.410638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.624 [2024-07-11 23:46:20.410666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.624 [2024-07-11 23:46:20.410682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.625 [2024-07-11 23:46:20.410695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.625 [2024-07-11 23:46:20.410725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.625 qpair failed and we were unable to recover it. 00:32:59.625 [2024-07-11 23:46:20.420498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.625 [2024-07-11 23:46:20.420685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.625 [2024-07-11 23:46:20.420713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.625 [2024-07-11 23:46:20.420728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.625 [2024-07-11 23:46:20.420741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:32:59.625 [2024-07-11 23:46:20.420772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.625 qpair failed and we were unable to recover it. 
00:32:59.625 [2024-07-11 23:46:20.430548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.430703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.430731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.430747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.430760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.430790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.440559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.440714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.440743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.440758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.440771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.440802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.450635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.450807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.450836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.450851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.450864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.450895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.460607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.460753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.460782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.460797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.460810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.460841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.470679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.470847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.470876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.470892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.470905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.470935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.480657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.480805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.480833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.480849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.480862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.480893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.490701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.490854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.490882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.490898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.490917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.490947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.500774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.500953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.500982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.500997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.501010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.501041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.510809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.510972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.511001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.511016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.511029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.511059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.520783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.520948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.520976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.520991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.521004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.521035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.530879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.531033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.531062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.531077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.531091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.531121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.540867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.541039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.541067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.541083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.541096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.541126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.550917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.551079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.551107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.551123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.551136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.551183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.560901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.561067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.561096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.561111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.561124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.561163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.625 [2024-07-11 23:46:20.570938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.625 [2024-07-11 23:46:20.571090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.625 [2024-07-11 23:46:20.571118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.625 [2024-07-11 23:46:20.571133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.625 [2024-07-11 23:46:20.571158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.625 [2024-07-11 23:46:20.571190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.625 qpair failed and we were unable to recover it.
00:32:59.885 [2024-07-11 23:46:20.580993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.885 [2024-07-11 23:46:20.581150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.885 [2024-07-11 23:46:20.581179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.885 [2024-07-11 23:46:20.581194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.885 [2024-07-11 23:46:20.581218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.885 [2024-07-11 23:46:20.581249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.885 qpair failed and we were unable to recover it.
00:32:59.885 [2024-07-11 23:46:20.590997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.885 [2024-07-11 23:46:20.591195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.885 [2024-07-11 23:46:20.591223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.885 [2024-07-11 23:46:20.591239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.885 [2024-07-11 23:46:20.591252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.885 [2024-07-11 23:46:20.591282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.885 qpair failed and we were unable to recover it.
00:32:59.885 [2024-07-11 23:46:20.601011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.885 [2024-07-11 23:46:20.601173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.885 [2024-07-11 23:46:20.601202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.885 [2024-07-11 23:46:20.601217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.885 [2024-07-11 23:46:20.601230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.885 [2024-07-11 23:46:20.601261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.885 qpair failed and we were unable to recover it.
00:32:59.885 [2024-07-11 23:46:20.611051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.885 [2024-07-11 23:46:20.611210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.885 [2024-07-11 23:46:20.611238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.885 [2024-07-11 23:46:20.611253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.885 [2024-07-11 23:46:20.611267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.885 [2024-07-11 23:46:20.611297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.885 qpair failed and we were unable to recover it.
00:32:59.885 [2024-07-11 23:46:20.621131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.885 [2024-07-11 23:46:20.621285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.885 [2024-07-11 23:46:20.621314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.885 [2024-07-11 23:46:20.621329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.885 [2024-07-11 23:46:20.621342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.885 [2024-07-11 23:46:20.621374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.885 qpair failed and we were unable to recover it.
00:32:59.885 [2024-07-11 23:46:20.631131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.885 [2024-07-11 23:46:20.631293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.885 [2024-07-11 23:46:20.631322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.885 [2024-07-11 23:46:20.631337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.631351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.631381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.641150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.641304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.641332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.641348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.641361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.641391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.651172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.651320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.651349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.651365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.651378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.651409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.661228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.661376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.661404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.661419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.661433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.661464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.671264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.671462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.671491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.671513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.671527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.671558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.681298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.681480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.681509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.681525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.681538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.681568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.691295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.691443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.691472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.691488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.691501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.691531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.701301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.701446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.701474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.701490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.701503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.701534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.711409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.711593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.711622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.711637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.711651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.711682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.721397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.721556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.721584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.721599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.721612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.721644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.731412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.731568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.731597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.731612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.731625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.731655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.741475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.741627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.741653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.741669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.741682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.741713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.751492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.751685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.751714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.751729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.751742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.751773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.761501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.761654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.761683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.761704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.761717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.761748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.771523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.886 [2024-07-11 23:46:20.771680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.886 [2024-07-11 23:46:20.771709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.886 [2024-07-11 23:46:20.771724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.886 [2024-07-11 23:46:20.771738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.886 [2024-07-11 23:46:20.771768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.886 qpair failed and we were unable to recover it.
00:32:59.886 [2024-07-11 23:46:20.781590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.887 [2024-07-11 23:46:20.781772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.887 [2024-07-11 23:46:20.781800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.887 [2024-07-11 23:46:20.781816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.887 [2024-07-11 23:46:20.781828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.887 [2024-07-11 23:46:20.781859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.887 qpair failed and we were unable to recover it.
00:32:59.887 [2024-07-11 23:46:20.791601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.887 [2024-07-11 23:46:20.791784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.887 [2024-07-11 23:46:20.791813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.887 [2024-07-11 23:46:20.791829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.887 [2024-07-11 23:46:20.791842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.887 [2024-07-11 23:46:20.791872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.887 qpair failed and we were unable to recover it.
00:32:59.887 [2024-07-11 23:46:20.801619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.887 [2024-07-11 23:46:20.801765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.887 [2024-07-11 23:46:20.801794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.887 [2024-07-11 23:46:20.801809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.887 [2024-07-11 23:46:20.801822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.887 [2024-07-11 23:46:20.801852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.887 qpair failed and we were unable to recover it.
00:32:59.887 [2024-07-11 23:46:20.811636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.887 [2024-07-11 23:46:20.811786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.887 [2024-07-11 23:46:20.811814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.887 [2024-07-11 23:46:20.811830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.887 [2024-07-11 23:46:20.811843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.887 [2024-07-11 23:46:20.811873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.887 qpair failed and we were unable to recover it.
00:32:59.887 [2024-07-11 23:46:20.821730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.887 [2024-07-11 23:46:20.821912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.887 [2024-07-11 23:46:20.821940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.887 [2024-07-11 23:46:20.821955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.887 [2024-07-11 23:46:20.821968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.887 [2024-07-11 23:46:20.822000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.887 qpair failed and we were unable to recover it.
00:32:59.887 [2024-07-11 23:46:20.831709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.887 [2024-07-11 23:46:20.831865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.887 [2024-07-11 23:46:20.831895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.887 [2024-07-11 23:46:20.831910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.887 [2024-07-11 23:46:20.831923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:32:59.887 [2024-07-11 23:46:20.831953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:59.887 qpair failed and we were unable to recover it.
00:33:00.161 [2024-07-11 23:46:20.841753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.161 [2024-07-11 23:46:20.841907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.161 [2024-07-11 23:46:20.841936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.161 [2024-07-11 23:46:20.841951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.161 [2024-07-11 23:46:20.841965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.161 [2024-07-11 23:46:20.841995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.161 qpair failed and we were unable to recover it.
00:33:00.161 [2024-07-11 23:46:20.851803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.161 [2024-07-11 23:46:20.851955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.161 [2024-07-11 23:46:20.851984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.161 [2024-07-11 23:46:20.852005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.161 [2024-07-11 23:46:20.852019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.161 [2024-07-11 23:46:20.852050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.161 qpair failed and we were unable to recover it.
00:33:00.161 [2024-07-11 23:46:20.861820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.161 [2024-07-11 23:46:20.861974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.161 [2024-07-11 23:46:20.862002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.161 [2024-07-11 23:46:20.862018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.161 [2024-07-11 23:46:20.862031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.161 [2024-07-11 23:46:20.862061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.161 qpair failed and we were unable to recover it.
00:33:00.161 [2024-07-11 23:46:20.871857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.872052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.872081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.872097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.872109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.872148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.881905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.882070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.882098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.882113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.882126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.882167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.891906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.892087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.892115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.892131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.892154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.892186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.901985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.902158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.902186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.902202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.902216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.902246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.911997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.912179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.912208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.912224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.912238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.912269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.921984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.922148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.922177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.922192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.922205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.922236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.932019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.932201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.932230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.932245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.932258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.932289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.942055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.942217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.942245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.942266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.942281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.942312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.952114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.952278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.952307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.952323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.952337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.952368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.962119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.962277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.962307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.962324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.962338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.962370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.972157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.972310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.972339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.972355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.972369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.972400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.982229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.982377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.982406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.982422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.982435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.982467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:20.992234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.162 [2024-07-11 23:46:20.992386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.162 [2024-07-11 23:46:20.992415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.162 [2024-07-11 23:46:20.992430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.162 [2024-07-11 23:46:20.992444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.162 [2024-07-11 23:46:20.992475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.162 qpair failed and we were unable to recover it.
00:33:00.162 [2024-07-11 23:46:21.002243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.162 [2024-07-11 23:46:21.002394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.162 [2024-07-11 23:46:21.002423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.162 [2024-07-11 23:46:21.002438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.162 [2024-07-11 23:46:21.002451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.162 [2024-07-11 23:46:21.002481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.162 qpair failed and we were unable to recover it. 00:33:00.162 [2024-07-11 23:46:21.012311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.162 [2024-07-11 23:46:21.012512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.162 [2024-07-11 23:46:21.012542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.012557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.012570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.012601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 00:33:00.163 [2024-07-11 23:46:21.022347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.022498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.022525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.022540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.022554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.022584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 
00:33:00.163 [2024-07-11 23:46:21.032388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.032573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.032607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.032624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.032637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.032669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 00:33:00.163 [2024-07-11 23:46:21.042429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.042667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.042696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.042711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.042724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.042755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 00:33:00.163 [2024-07-11 23:46:21.052401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.052569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.052599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.052614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.052628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.052659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 
00:33:00.163 [2024-07-11 23:46:21.062415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.062567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.062596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.062612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.062625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.062656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 00:33:00.163 [2024-07-11 23:46:21.072470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.072671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.072700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.072717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.072730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.072761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 00:33:00.163 [2024-07-11 23:46:21.082454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.082610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.082639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.082654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.082668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.082699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 
00:33:00.163 [2024-07-11 23:46:21.092550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.092701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.092730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.092746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.092759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.092789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 00:33:00.163 [2024-07-11 23:46:21.102566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.163 [2024-07-11 23:46:21.102718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.163 [2024-07-11 23:46:21.102746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.163 [2024-07-11 23:46:21.102762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.163 [2024-07-11 23:46:21.102776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.163 [2024-07-11 23:46:21.102807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.163 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-11 23:46:21.112721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.422 [2024-07-11 23:46:21.112916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.422 [2024-07-11 23:46:21.112945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.422 [2024-07-11 23:46:21.112961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.422 [2024-07-11 23:46:21.112974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.422 [2024-07-11 23:46:21.113005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.422 qpair failed and we were unable to recover it. 
00:33:00.422 [2024-07-11 23:46:21.122577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.422 [2024-07-11 23:46:21.122729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.422 [2024-07-11 23:46:21.122764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.422 [2024-07-11 23:46:21.122781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.422 [2024-07-11 23:46:21.122794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.422 [2024-07-11 23:46:21.122825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-11 23:46:21.132720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.422 [2024-07-11 23:46:21.132995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.422 [2024-07-11 23:46:21.133024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.422 [2024-07-11 23:46:21.133039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.422 [2024-07-11 23:46:21.133052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.422 [2024-07-11 23:46:21.133083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-11 23:46:21.142692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.422 [2024-07-11 23:46:21.142839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.422 [2024-07-11 23:46:21.142867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.422 [2024-07-11 23:46:21.142883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.422 [2024-07-11 23:46:21.142896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.422 [2024-07-11 23:46:21.142926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.422 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-11 23:46:21.152675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.152834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.152863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.152879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.152892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.152922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.162731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.162882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.162920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.162936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.162949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.162986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.172712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.172862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.172892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.172908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.172921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.172951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-11 23:46:21.182780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.182931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.182960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.182976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.182988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.183019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.192844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.193000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.193037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.193053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.193067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.193097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.202846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.202996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.203024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.203040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.203053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.203084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-11 23:46:21.212921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.213074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.213108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.213124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.213146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.213190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.222864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.223048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.223077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.223092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.223105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.223135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.232955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.233111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.233147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.233165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.233178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.233208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-11 23:46:21.242942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.243090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.243119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.243134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.243157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.243189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.253009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.253181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.253210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.253225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.253238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.253278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-11 23:46:21.263001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.263163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.263192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.263208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.423 [2024-07-11 23:46:21.263221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.423 [2024-07-11 23:46:21.263252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-11 23:46:21.273051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.423 [2024-07-11 23:46:21.273227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.423 [2024-07-11 23:46:21.273256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.423 [2024-07-11 23:46:21.273272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.273285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.273316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-11 23:46:21.283057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.283213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.283241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.283257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.283269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.283301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-11 23:46:21.293176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.293323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.293353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.293368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.293381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.293412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 
00:33:00.424 [2024-07-11 23:46:21.303211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.303415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.303450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.303467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.303480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.303511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-11 23:46:21.313187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.313353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.313381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.313397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.313410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.313442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-11 23:46:21.323206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.323360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.323389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.323404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.323416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.323447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 
00:33:00.424 [2024-07-11 23:46:21.333242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.333394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.333423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.333439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.333452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.333482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-11 23:46:21.343332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.343507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.343535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.343550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.343564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.343600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-11 23:46:21.353316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.353502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.353531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.353546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.353559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.353589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 
00:33:00.424 [2024-07-11 23:46:21.363344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.424 [2024-07-11 23:46:21.363539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.424 [2024-07-11 23:46:21.363569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.424 [2024-07-11 23:46:21.363584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.424 [2024-07-11 23:46:21.363597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.424 [2024-07-11 23:46:21.363628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.684 [2024-07-11 23:46:21.373350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.373505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.373534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.373550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.373563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.684 [2024-07-11 23:46:21.373594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.684 qpair failed and we were unable to recover it. 00:33:00.684 [2024-07-11 23:46:21.383362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.383515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.383543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.383559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.383572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.684 [2024-07-11 23:46:21.383603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.684 qpair failed and we were unable to recover it. 
00:33:00.684 [2024-07-11 23:46:21.393400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.393551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.393585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.393602] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.393615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.684 [2024-07-11 23:46:21.393645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.684 qpair failed and we were unable to recover it. 00:33:00.684 [2024-07-11 23:46:21.403539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.403694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.403730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.403745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.403759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.684 [2024-07-11 23:46:21.403789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.684 qpair failed and we were unable to recover it. 00:33:00.684 [2024-07-11 23:46:21.413483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.413650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.413679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.413694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.413707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.684 [2024-07-11 23:46:21.413737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.684 qpair failed and we were unable to recover it. 
00:33:00.684 [2024-07-11 23:46:21.423538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.423712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.423740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.423755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.423768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.684 [2024-07-11 23:46:21.423798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.684 qpair failed and we were unable to recover it. 00:33:00.684 [2024-07-11 23:46:21.433544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.433703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.433731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.433747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.433760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.684 [2024-07-11 23:46:21.433795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.684 qpair failed and we were unable to recover it. 00:33:00.684 [2024-07-11 23:46:21.443568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.684 [2024-07-11 23:46:21.443756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.684 [2024-07-11 23:46:21.443784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.684 [2024-07-11 23:46:21.443800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.684 [2024-07-11 23:46:21.443813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.443843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 
00:33:00.685 [2024-07-11 23:46:21.453596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.453741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.453770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.453786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.453799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.453830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.463656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.463813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.463842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.463857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.463870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.463900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.473647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.473801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.473830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.473845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.473859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.473889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 
00:33:00.685 [2024-07-11 23:46:21.483654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.483807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.483841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.483858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.483871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.483901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.493694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.493857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.493885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.493901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.493914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.493944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.503786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.503935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.503963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.503979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.503993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.504024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 
00:33:00.685 [2024-07-11 23:46:21.513792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.513975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.514003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.514018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.514031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.514062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.523775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.523946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.523975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.523990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.524010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.524041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.533831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.534025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.534053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.534069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.534082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.534112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 
00:33:00.685 [2024-07-11 23:46:21.543826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.543964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.543992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.544008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.544020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.544052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.553880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.554037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.554067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.554082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.554095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.554126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 00:33:00.685 [2024-07-11 23:46:21.563931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.685 [2024-07-11 23:46:21.564125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.685 [2024-07-11 23:46:21.564169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.685 [2024-07-11 23:46:21.564189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.685 [2024-07-11 23:46:21.564202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:00.685 [2024-07-11 23:46:21.564234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:00.685 qpair failed and we were unable to recover it. 
00:33:00.685 [2024-07-11 23:46:21.574004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.685 [2024-07-11 23:46:21.574191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.685 [2024-07-11 23:46:21.574219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.685 [2024-07-11 23:46:21.574234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.685 [2024-07-11 23:46:21.574247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.685 [2024-07-11 23:46:21.574278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.685 qpair failed and we were unable to recover it.
00:33:00.685 [2024-07-11 23:46:21.584091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.685 [2024-07-11 23:46:21.584321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.685 [2024-07-11 23:46:21.584349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.685 [2024-07-11 23:46:21.584364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.685 [2024-07-11 23:46:21.584377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.685 [2024-07-11 23:46:21.584409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.685 qpair failed and we were unable to recover it.
00:33:00.685 [2024-07-11 23:46:21.594038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.685 [2024-07-11 23:46:21.594208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.685 [2024-07-11 23:46:21.594237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.685 [2024-07-11 23:46:21.594252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.685 [2024-07-11 23:46:21.594265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.685 [2024-07-11 23:46:21.594295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.685 qpair failed and we were unable to recover it.
00:33:00.685 [2024-07-11 23:46:21.604065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.685 [2024-07-11 23:46:21.604266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.685 [2024-07-11 23:46:21.604295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.685 [2024-07-11 23:46:21.604311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.685 [2024-07-11 23:46:21.604324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.685 [2024-07-11 23:46:21.604355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.685 qpair failed and we were unable to recover it.
00:33:00.685 [2024-07-11 23:46:21.614048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.685 [2024-07-11 23:46:21.614213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.685 [2024-07-11 23:46:21.614241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.685 [2024-07-11 23:46:21.614257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.685 [2024-07-11 23:46:21.614275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.685 [2024-07-11 23:46:21.614308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.685 qpair failed and we were unable to recover it.
00:33:00.685 [2024-07-11 23:46:21.624098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.685 [2024-07-11 23:46:21.624251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.685 [2024-07-11 23:46:21.624280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.685 [2024-07-11 23:46:21.624295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.685 [2024-07-11 23:46:21.624308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.685 [2024-07-11 23:46:21.624339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.685 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.634161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.634337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.634366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.634382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.634396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.634427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.644155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.644320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.644348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.644364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.644377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.644407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.654175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.654332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.654361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.654377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.654390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.654420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.664216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.664373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.664402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.664417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.664430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.664461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.674235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.674392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.674421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.674436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.674450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.674480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.684247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.684396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.684424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.684439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.684452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.684483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.694316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.694477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.694506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.694521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.694534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.694564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.704314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.704493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.704522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.704537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.704557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.704588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.714348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.714546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.714574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.714590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.714602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.714633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.724370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.724547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.724575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.724591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.724604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.724634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.734431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.734583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.734612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.734627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.734640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.734671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.744494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.744698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.744724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.744741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.744764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.744798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.754514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.754678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.754707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.754723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.754736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.754766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.764523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.764670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.764709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.764724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.764738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.764769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.774506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.774652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.774681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.774696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.774710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.774740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.784579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.784727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.784756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.784772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.784785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.784816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.794589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.794776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.944 [2024-07-11 23:46:21.794805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.944 [2024-07-11 23:46:21.794821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.944 [2024-07-11 23:46:21.794840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.944 [2024-07-11 23:46:21.794871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.944 qpair failed and we were unable to recover it.
00:33:00.944 [2024-07-11 23:46:21.804616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.944 [2024-07-11 23:46:21.804766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.804794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.804809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.804823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.804854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.814657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.814836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.814864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.814880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.814893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.814923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.824641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.824789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.824818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.824833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.824846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.824876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.834723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.834891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.834919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.834935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.834948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.834978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.844750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.844919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.844959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.844974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.844987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.845018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.854778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.854974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.855004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.855020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.855033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.855063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.864778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.864945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.864974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.864989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.865002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.865033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.874892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.875085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.875114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.875129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.875153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.875185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:00.945 [2024-07-11 23:46:21.884870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.945 [2024-07-11 23:46:21.885063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.945 [2024-07-11 23:46:21.885091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.945 [2024-07-11 23:46:21.885112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.945 [2024-07-11 23:46:21.885127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:00.945 [2024-07-11 23:46:21.885167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:00.945 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.894904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.895089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.895117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.895133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.895156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.895195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.904892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.905041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.905069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.905084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.905098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.905128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.914930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.915083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.915111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.915127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.915150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.915183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.924944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.925096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.925125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.925152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.925168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.925200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.934988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.935192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.935222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.935237] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.935250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.935281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.944991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.945169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.945199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.945214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.945227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.945259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.955085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.955250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.955279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.955295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.955308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.955338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.965115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.965277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.965306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.965321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.965334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.965365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.975176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.975340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.975368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.975390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.975403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.975434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.985170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.985343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.985371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.985386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.985399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.985440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:21.995170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:21.995325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:21.995353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:21.995368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:21.995382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:21.995412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:22.005236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:22.005446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:22.005474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:22.005490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:22.005504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.204 [2024-07-11 23:46:22.005535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.204 qpair failed and we were unable to recover it.
00:33:01.204 [2024-07-11 23:46:22.015314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.204 [2024-07-11 23:46:22.015470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.204 [2024-07-11 23:46:22.015499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.204 [2024-07-11 23:46:22.015514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.204 [2024-07-11 23:46:22.015527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.015558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.025293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.025448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.025477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.025493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.025506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.025536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.035332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.035494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.035522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.035537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.035551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.035582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.045337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.045486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.045515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.045530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.045543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.045574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.055355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.055520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.055549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.055564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.055577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.055608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.065378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.065558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.065587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.065612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.065626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.065657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.075412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.075569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.075597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.075613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.075626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.075657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.085427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.085599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.085627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.085643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.085656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.085687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.095490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.095638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.095667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.095682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.095695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.095725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.105466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.105619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.105647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.105662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.105676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.105706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.115507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.115685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.115714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.115729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.115741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.115771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.125577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.125725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.125754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.125770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.125783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.125814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.135562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.205 [2024-07-11 23:46:22.135759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.205 [2024-07-11 23:46:22.135788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.205 [2024-07-11 23:46:22.135803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.205 [2024-07-11 23:46:22.135817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.205 [2024-07-11 23:46:22.135848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.205 qpair failed and we were unable to recover it.
00:33:01.205 [2024-07-11 23:46:22.145603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.205 [2024-07-11 23:46:22.145757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.205 [2024-07-11 23:46:22.145787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.205 [2024-07-11 23:46:22.145802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.205 [2024-07-11 23:46:22.145815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.205 [2024-07-11 23:46:22.145847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.205 qpair failed and we were unable to recover it. 00:33:01.464 [2024-07-11 23:46:22.155632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.464 [2024-07-11 23:46:22.155790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.464 [2024-07-11 23:46:22.155819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.464 [2024-07-11 23:46:22.155841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.464 [2024-07-11 23:46:22.155855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.464 [2024-07-11 23:46:22.155886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.464 qpair failed and we were unable to recover it. 00:33:01.464 [2024-07-11 23:46:22.165668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.464 [2024-07-11 23:46:22.165822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.464 [2024-07-11 23:46:22.165850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.464 [2024-07-11 23:46:22.165866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.464 [2024-07-11 23:46:22.165879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.464 [2024-07-11 23:46:22.165910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.464 qpair failed and we were unable to recover it. 
00:33:01.464 [2024-07-11 23:46:22.175685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.175863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.175891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.175907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.175919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.175950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 00:33:01.465 [2024-07-11 23:46:22.185727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.185876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.185904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.185919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.185932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.185963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 00:33:01.465 [2024-07-11 23:46:22.195844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.196001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.196029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.196046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.196059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.196089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 
00:33:01.465 [2024-07-11 23:46:22.205857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.206012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.206041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.206057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.206070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.206101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 00:33:01.465 [2024-07-11 23:46:22.215810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.215960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.215988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.216004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.216017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.216047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 00:33:01.465 [2024-07-11 23:46:22.225841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.225990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.226019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.226034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.226047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.226078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 
00:33:01.465 [2024-07-11 23:46:22.235876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.236036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.236064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.236080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.236093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.236124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 00:33:01.465 [2024-07-11 23:46:22.245892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.246047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.246081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.246098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.246111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.246157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 00:33:01.465 [2024-07-11 23:46:22.255931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.256088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.256117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.256132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.256162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.256199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 
00:33:01.465 [2024-07-11 23:46:22.265930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.266086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.465 [2024-07-11 23:46:22.266115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.465 [2024-07-11 23:46:22.266130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.465 [2024-07-11 23:46:22.266158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.465 [2024-07-11 23:46:22.266190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.465 qpair failed and we were unable to recover it. 00:33:01.465 [2024-07-11 23:46:22.275979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.465 [2024-07-11 23:46:22.276136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.276174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.276190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.276203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.276234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 00:33:01.466 [2024-07-11 23:46:22.285998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.286165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.286195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.286210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.286223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.286254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 
00:33:01.466 [2024-07-11 23:46:22.296121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.296279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.296308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.296324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.296337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.296367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 00:33:01.466 [2024-07-11 23:46:22.306056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.306216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.306244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.306260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.306272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.306303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 00:33:01.466 [2024-07-11 23:46:22.316126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.316288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.316317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.316332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.316345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.316375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 
00:33:01.466 [2024-07-11 23:46:22.326109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.326268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.326297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.326313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.326326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.326356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 00:33:01.466 [2024-07-11 23:46:22.336182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.336363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.336396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.336413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.336426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.336457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 00:33:01.466 [2024-07-11 23:46:22.346187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.346343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.346371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.346386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.346399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.346430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 
00:33:01.466 [2024-07-11 23:46:22.356232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.356387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.356416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.356431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.356444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.356474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 00:33:01.466 [2024-07-11 23:46:22.366270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.466 [2024-07-11 23:46:22.366421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.466 [2024-07-11 23:46:22.366449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.466 [2024-07-11 23:46:22.366464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.466 [2024-07-11 23:46:22.366477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.466 [2024-07-11 23:46:22.366508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.466 qpair failed and we were unable to recover it. 00:33:01.467 [2024-07-11 23:46:22.376272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.467 [2024-07-11 23:46:22.376421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.467 [2024-07-11 23:46:22.376449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.467 [2024-07-11 23:46:22.376465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.467 [2024-07-11 23:46:22.376478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.467 [2024-07-11 23:46:22.376513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.467 qpair failed and we were unable to recover it. 
00:33:01.467 [2024-07-11 23:46:22.386417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.467 [2024-07-11 23:46:22.386578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.467 [2024-07-11 23:46:22.386608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.467 [2024-07-11 23:46:22.386623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.467 [2024-07-11 23:46:22.386636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.467 [2024-07-11 23:46:22.386667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.467 qpair failed and we were unable to recover it. 00:33:01.467 [2024-07-11 23:46:22.396397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.467 [2024-07-11 23:46:22.396555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.467 [2024-07-11 23:46:22.396583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.467 [2024-07-11 23:46:22.396599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.467 [2024-07-11 23:46:22.396612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.467 [2024-07-11 23:46:22.396642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.467 qpair failed and we were unable to recover it. 00:33:01.467 [2024-07-11 23:46:22.406401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.467 [2024-07-11 23:46:22.406605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.467 [2024-07-11 23:46:22.406633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.467 [2024-07-11 23:46:22.406648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.467 [2024-07-11 23:46:22.406661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.467 [2024-07-11 23:46:22.406692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.467 qpair failed and we were unable to recover it. 
00:33:01.725 [2024-07-11 23:46:22.416402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.725 [2024-07-11 23:46:22.416561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.725 [2024-07-11 23:46:22.416590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.725 [2024-07-11 23:46:22.416605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.725 [2024-07-11 23:46:22.416618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.725 [2024-07-11 23:46:22.416649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.725 qpair failed and we were unable to recover it. 00:33:01.725 [2024-07-11 23:46:22.426454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.725 [2024-07-11 23:46:22.426619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.725 [2024-07-11 23:46:22.426653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.725 [2024-07-11 23:46:22.426670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.725 [2024-07-11 23:46:22.426683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.725 [2024-07-11 23:46:22.426714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.725 qpair failed and we were unable to recover it. 00:33:01.725 [2024-07-11 23:46:22.436476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.725 [2024-07-11 23:46:22.436628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.725 [2024-07-11 23:46:22.436656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.725 [2024-07-11 23:46:22.436672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.725 [2024-07-11 23:46:22.436685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.725 [2024-07-11 23:46:22.436715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.725 qpair failed and we were unable to recover it. 
00:33:01.725 [2024-07-11 23:46:22.446487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.725 [2024-07-11 23:46:22.446645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.725 [2024-07-11 23:46:22.446673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.725 [2024-07-11 23:46:22.446689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.725 [2024-07-11 23:46:22.446702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.725 [2024-07-11 23:46:22.446732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.725 qpair failed and we were unable to recover it. 00:33:01.725 [2024-07-11 23:46:22.456509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.725 [2024-07-11 23:46:22.456661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.725 [2024-07-11 23:46:22.456690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.725 [2024-07-11 23:46:22.456705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.725 [2024-07-11 23:46:22.456718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.725 [2024-07-11 23:46:22.456748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.725 qpair failed and we were unable to recover it. 00:33:01.725 [2024-07-11 23:46:22.466537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.725 [2024-07-11 23:46:22.466689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.725 [2024-07-11 23:46:22.466716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.725 [2024-07-11 23:46:22.466731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.725 [2024-07-11 23:46:22.466744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.725 [2024-07-11 23:46:22.466780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.725 qpair failed and we were unable to recover it. 
00:33:01.725 [2024-07-11 23:46:22.476569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.725 [2024-07-11 23:46:22.476751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.725 [2024-07-11 23:46:22.476779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.476794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.476807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.476838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.486591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.486790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.486818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.486833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.486846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.486876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.496621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.496771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.496799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.496815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.496828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.496858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 
00:33:01.726 [2024-07-11 23:46:22.506762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.506954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.506982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.506996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.507009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.507040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.516698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.516854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.516887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.516903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.516916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.516947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.526712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.526868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.526895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.526910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.526923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.526954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 
00:33:01.726 [2024-07-11 23:46:22.536866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.537043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.537071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.537085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.537099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.537129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.546809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.546958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.546987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.547002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.547015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.547046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.556817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.556971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.556999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.557015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.557028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.557064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 
00:33:01.726 [2024-07-11 23:46:22.566824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.567004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.567031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.567046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.567060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.567089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.576831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.577027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.577055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.577071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.577083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.577113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.586888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.587041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.587068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.587083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.587096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.587125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 
00:33:01.726 [2024-07-11 23:46:22.596926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.597114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.597150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.597168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.597181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.597211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.606940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.607108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.607148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.607167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.607180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.607210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.616971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.617124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.617160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.617177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.617190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.617220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 
00:33:01.726 [2024-07-11 23:46:22.626981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.627162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.627190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.627206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.627219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.627249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.637068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.637254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.637282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.637297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.637310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.637340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 00:33:01.726 [2024-07-11 23:46:22.647084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.726 [2024-07-11 23:46:22.647245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.726 [2024-07-11 23:46:22.647273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.726 [2024-07-11 23:46:22.647289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.726 [2024-07-11 23:46:22.647301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:01.726 [2024-07-11 23:46:22.647337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.726 qpair failed and we were unable to recover it. 
00:33:01.726 [2024-07-11 23:46:22.657100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.726 [2024-07-11 23:46:22.657300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.726 [2024-07-11 23:46:22.657329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.726 [2024-07-11 23:46:22.657344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.726 [2024-07-11 23:46:22.657357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.726 [2024-07-11 23:46:22.657387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.726 qpair failed and we were unable to recover it.
00:33:01.726 [2024-07-11 23:46:22.667144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.726 [2024-07-11 23:46:22.667292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.726 [2024-07-11 23:46:22.667320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.726 [2024-07-11 23:46:22.667335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.726 [2024-07-11 23:46:22.667348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.726 [2024-07-11 23:46:22.667378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.726 qpair failed and we were unable to recover it.
00:33:01.985 [2024-07-11 23:46:22.677133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.985 [2024-07-11 23:46:22.677298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.985 [2024-07-11 23:46:22.677326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.985 [2024-07-11 23:46:22.677341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.985 [2024-07-11 23:46:22.677354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.985 [2024-07-11 23:46:22.677384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.985 qpair failed and we were unable to recover it.
00:33:01.985 [2024-07-11 23:46:22.687155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.985 [2024-07-11 23:46:22.687305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.985 [2024-07-11 23:46:22.687334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.985 [2024-07-11 23:46:22.687349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.985 [2024-07-11 23:46:22.687362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.985 [2024-07-11 23:46:22.687392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.985 qpair failed and we were unable to recover it.
00:33:01.985 [2024-07-11 23:46:22.697184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.985 [2024-07-11 23:46:22.697370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.985 [2024-07-11 23:46:22.697404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.985 [2024-07-11 23:46:22.697420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.985 [2024-07-11 23:46:22.697433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.985 [2024-07-11 23:46:22.697463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.985 qpair failed and we were unable to recover it.
00:33:01.985 [2024-07-11 23:46:22.707205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.985 [2024-07-11 23:46:22.707361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.985 [2024-07-11 23:46:22.707388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.985 [2024-07-11 23:46:22.707403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.985 [2024-07-11 23:46:22.707416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.985 [2024-07-11 23:46:22.707447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.985 qpair failed and we were unable to recover it.
00:33:01.985 [2024-07-11 23:46:22.717250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.985 [2024-07-11 23:46:22.717406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.985 [2024-07-11 23:46:22.717433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.985 [2024-07-11 23:46:22.717448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.985 [2024-07-11 23:46:22.717462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.985 [2024-07-11 23:46:22.717492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.985 qpair failed and we were unable to recover it.
00:33:01.985 [2024-07-11 23:46:22.727278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.985 [2024-07-11 23:46:22.727463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.985 [2024-07-11 23:46:22.727490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.985 [2024-07-11 23:46:22.727505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.985 [2024-07-11 23:46:22.727518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.985 [2024-07-11 23:46:22.727549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.985 qpair failed and we were unable to recover it.
00:33:01.985 [2024-07-11 23:46:22.737302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.985 [2024-07-11 23:46:22.737469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.985 [2024-07-11 23:46:22.737497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.985 [2024-07-11 23:46:22.737512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.985 [2024-07-11 23:46:22.737534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.737565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.747331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.747513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.747540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.747555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.747568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.747599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.757361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.757522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.757550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.757565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.757578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.757609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.767420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.767573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.767601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.767616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.767629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.767660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.777466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.777653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.777681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.777696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.777708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.777739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.787480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.787657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.787685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.787700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.787713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.787744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.797512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.797712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.797740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.797755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.797768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.797798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.807503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.807653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.807680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.807695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.807708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.807739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.817553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.817705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.817733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.817748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.817761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.817791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.827575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.827754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.827781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.827796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.827815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.827846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.837599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.837760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.837788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.837802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.837815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.837846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.847615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.847767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.847796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.847811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.986 [2024-07-11 23:46:22.847825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.986 [2024-07-11 23:46:22.847855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.986 qpair failed and we were unable to recover it.
00:33:01.986 [2024-07-11 23:46:22.857653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.986 [2024-07-11 23:46:22.857805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.986 [2024-07-11 23:46:22.857832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.986 [2024-07-11 23:46:22.857848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.857861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.857891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:01.987 [2024-07-11 23:46:22.867746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.987 [2024-07-11 23:46:22.867895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.987 [2024-07-11 23:46:22.867923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.987 [2024-07-11 23:46:22.867938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.867951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.867982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:01.987 [2024-07-11 23:46:22.877702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.987 [2024-07-11 23:46:22.877863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.987 [2024-07-11 23:46:22.877892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.987 [2024-07-11 23:46:22.877907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.877920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.877951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:01.987 [2024-07-11 23:46:22.887727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.987 [2024-07-11 23:46:22.887883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.987 [2024-07-11 23:46:22.887910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.987 [2024-07-11 23:46:22.887926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.887938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.887968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:01.987 [2024-07-11 23:46:22.897787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.987 [2024-07-11 23:46:22.897943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.987 [2024-07-11 23:46:22.897971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.987 [2024-07-11 23:46:22.897986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.897999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.898030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:01.987 [2024-07-11 23:46:22.907770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.987 [2024-07-11 23:46:22.907920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.987 [2024-07-11 23:46:22.907948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.987 [2024-07-11 23:46:22.907963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.907976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.908007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:01.987 [2024-07-11 23:46:22.917830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.987 [2024-07-11 23:46:22.917990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.987 [2024-07-11 23:46:22.918018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.987 [2024-07-11 23:46:22.918034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.918054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.918085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:01.987 [2024-07-11 23:46:22.927846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.987 [2024-07-11 23:46:22.927996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.987 [2024-07-11 23:46:22.928024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.987 [2024-07-11 23:46:22.928039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.987 [2024-07-11 23:46:22.928052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:01.987 [2024-07-11 23:46:22.928082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.987 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:22.937871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:22.938022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:22.938049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:22.938065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:22.938077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:22.938108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:22.947995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:22.948188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:22.948217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:22.948232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:22.948245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:22.948275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:22.958060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:22.958222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:22.958251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:22.958266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:22.958279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:22.958310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:22.968003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:22.968210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:22.968239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:22.968254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:22.968267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:22.968298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:22.978027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:22.978204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:22.978232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:22.978248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:22.978261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:22.978291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:22.988040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:22.988192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:22.988220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:22.988236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:22.988249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:22.988280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:22.998079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:22.998236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:22.998264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:22.998279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:22.998292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:22.998322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:23.008119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:23.008318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:23.008346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:23.008361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:23.008380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:23.008412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.250 qpair failed and we were unable to recover it.
00:33:02.250 [2024-07-11 23:46:23.018107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.250 [2024-07-11 23:46:23.018265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.250 [2024-07-11 23:46:23.018294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.250 [2024-07-11 23:46:23.018310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.250 [2024-07-11 23:46:23.018323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.250 [2024-07-11 23:46:23.018354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.028189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.028338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.028366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.028381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.028394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.028425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.038180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.038338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.038367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.038383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.038396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.038426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.048236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.048391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.048420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.048436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.048449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.048478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.058231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.058391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.058419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.058435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.058448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.058477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.068272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.068421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.068447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.068463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.068476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.068506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.078312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.078480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.078508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.078523] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.078536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.078566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.088333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.088482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.088509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.088524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.088537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.088567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.098365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.098513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.098541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.098561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.098575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.098605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.108367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.108577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.108604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.108620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.108633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.108663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.118403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.118605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.118633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.118648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.118661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.118691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.128442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.128595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.128622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.128637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.128650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.128680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.138440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.138606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.138634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.138648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.138661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.138692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.148514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.148706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.148735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.148750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.148763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.148793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.158521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.251 [2024-07-11 23:46:23.158674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.251 [2024-07-11 23:46:23.158702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.251 [2024-07-11 23:46:23.158718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.251 [2024-07-11 23:46:23.158731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.251 [2024-07-11 23:46:23.158761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.251 qpair failed and we were unable to recover it.
00:33:02.251 [2024-07-11 23:46:23.168594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.252 [2024-07-11 23:46:23.168783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.252 [2024-07-11 23:46:23.168816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.252 [2024-07-11 23:46:23.168831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.252 [2024-07-11 23:46:23.168844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.252 [2024-07-11 23:46:23.168874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.252 qpair failed and we were unable to recover it.
00:33:02.252 [2024-07-11 23:46:23.178617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.252 [2024-07-11 23:46:23.178765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.252 [2024-07-11 23:46:23.178792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.252 [2024-07-11 23:46:23.178807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.252 [2024-07-11 23:46:23.178820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.252 [2024-07-11 23:46:23.178850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.252 qpair failed and we were unable to recover it.
00:33:02.252 [2024-07-11 23:46:23.188600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.252 [2024-07-11 23:46:23.188755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.252 [2024-07-11 23:46:23.188783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.252 [2024-07-11 23:46:23.188805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.252 [2024-07-11 23:46:23.188819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.252 [2024-07-11 23:46:23.188849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.252 qpair failed and we were unable to recover it.
00:33:02.252 [2024-07-11 23:46:23.198695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.198860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.198890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.198907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.198922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.198959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.208656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.208807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.208834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.208849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.208862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.208892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.218684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.218909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.218936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.218952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.218965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.218995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.228787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.228935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.228962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.228977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.228990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.229021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.238803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.238985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.239012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.239028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.239041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.239071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.248800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.248950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.248978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.248994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.249007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.249038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.258780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.258955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.258983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.258998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.259011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.259041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.268837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.268982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.269010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.269025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.269037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.269068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.278915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.279087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.279115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.279147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.279163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.279195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.288936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.289129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.289166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.289182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.289195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.289226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.298962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.299145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.299174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.299190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.299203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.299233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.513 qpair failed and we were unable to recover it.
00:33:02.513 [2024-07-11 23:46:23.308983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.513 [2024-07-11 23:46:23.309177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.513 [2024-07-11 23:46:23.309206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.513 [2024-07-11 23:46:23.309221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.513 [2024-07-11 23:46:23.309234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.513 [2024-07-11 23:46:23.309266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.514 qpair failed and we were unable to recover it.
00:33:02.514 [2024-07-11 23:46:23.318998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.514 [2024-07-11 23:46:23.319187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.514 [2024-07-11 23:46:23.319216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.514 [2024-07-11 23:46:23.319232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.514 [2024-07-11 23:46:23.319245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.514 [2024-07-11 23:46:23.319276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.514 qpair failed and we were unable to recover it.
00:33:02.514 [2024-07-11 23:46:23.329041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.514 [2024-07-11 23:46:23.329229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.514 [2024-07-11 23:46:23.329257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.514 [2024-07-11 23:46:23.329272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.514 [2024-07-11 23:46:23.329285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.514 [2024-07-11 23:46:23.329317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.514 qpair failed and we were unable to recover it.
00:33:02.514 [2024-07-11 23:46:23.339098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:02.514 [2024-07-11 23:46:23.339261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:02.514 [2024-07-11 23:46:23.339290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:02.514 [2024-07-11 23:46:23.339304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:02.514 [2024-07-11 23:46:23.339318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:02.514 [2024-07-11 23:46:23.339348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:02.514 qpair failed and we were unable to recover it.
00:33:02.514 [2024-07-11 23:46:23.349150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.349339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.349368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.349384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.349397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.349427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 00:33:02.514 [2024-07-11 23:46:23.359182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.359382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.359410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.359426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.359439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.359469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 00:33:02.514 [2024-07-11 23:46:23.369143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.369293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.369320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.369342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.369356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.369387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 
00:33:02.514 [2024-07-11 23:46:23.379173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.379323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.379352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.379367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.379380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.379410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 00:33:02.514 [2024-07-11 23:46:23.389230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.389403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.389430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.389445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.389458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.389489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 00:33:02.514 [2024-07-11 23:46:23.399318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.399503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.399531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.399546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.399559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.399590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 
00:33:02.514 [2024-07-11 23:46:23.409282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.409453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.409481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.409496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.409509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.409540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 00:33:02.514 [2024-07-11 23:46:23.419297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.419470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.419498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.419514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.419526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.419556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 00:33:02.514 [2024-07-11 23:46:23.429400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.429544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.429571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.429587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.429600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.429631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 
00:33:02.514 [2024-07-11 23:46:23.439348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.439528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.439556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.439572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.439585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.514 [2024-07-11 23:46:23.439615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.514 qpair failed and we were unable to recover it. 00:33:02.514 [2024-07-11 23:46:23.449372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.514 [2024-07-11 23:46:23.449535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.514 [2024-07-11 23:46:23.449563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.514 [2024-07-11 23:46:23.449579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.514 [2024-07-11 23:46:23.449591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.515 [2024-07-11 23:46:23.449622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.515 qpair failed and we were unable to recover it. 00:33:02.515 [2024-07-11 23:46:23.459403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.515 [2024-07-11 23:46:23.459550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.515 [2024-07-11 23:46:23.459655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.515 [2024-07-11 23:46:23.459673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.515 [2024-07-11 23:46:23.459686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.515 [2024-07-11 23:46:23.459716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.515 qpair failed and we were unable to recover it. 
00:33:02.774 [2024-07-11 23:46:23.469443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.469594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.469622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.469637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.469651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.469680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 00:33:02.774 [2024-07-11 23:46:23.479474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.479631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.479658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.479673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.479687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.479717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 00:33:02.774 [2024-07-11 23:46:23.489507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.489671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.489698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.489713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.489726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.489756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 
00:33:02.774 [2024-07-11 23:46:23.499636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.499787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.499814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.499830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.499843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.499873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 00:33:02.774 [2024-07-11 23:46:23.509588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.509757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.509785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.509800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.509813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.509843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 00:33:02.774 [2024-07-11 23:46:23.519606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.519781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.519809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.519824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.519837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.519866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 
00:33:02.774 [2024-07-11 23:46:23.529679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.529866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.529893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.529907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.529921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.529950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 00:33:02.774 [2024-07-11 23:46:23.539711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.539860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.539887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.539902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.774 [2024-07-11 23:46:23.539915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.774 [2024-07-11 23:46:23.539945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.774 qpair failed and we were unable to recover it. 00:33:02.774 [2024-07-11 23:46:23.549702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.774 [2024-07-11 23:46:23.549989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.774 [2024-07-11 23:46:23.550027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.774 [2024-07-11 23:46:23.550043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.550057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.550087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 
00:33:02.775 [2024-07-11 23:46:23.559713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.559863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.559891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.559906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.559919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.559949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.569750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.569930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.569958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.569973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.569986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.570017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.579791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.579940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.579967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.579982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.579995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.580025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 
00:33:02.775 [2024-07-11 23:46:23.589808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.590043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.590071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.590086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.590099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.590136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.599859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.600050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.600077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.600092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.600106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.600135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.609855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.610006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.610033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.610048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.610060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.610090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 
00:33:02.775 [2024-07-11 23:46:23.619866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.620034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.620062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.620077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.620090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.620120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.629909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.630056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.630084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.630099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.630112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.630152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.640032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.640229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.640263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.640279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.640291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.640323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 
00:33:02.775 [2024-07-11 23:46:23.650015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.650305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.650346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.650362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.650375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.650407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.660026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.660190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.660221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.660236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.660249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.660280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.670050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.670201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.670228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.670252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.670265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.670296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 
00:33:02.775 [2024-07-11 23:46:23.680111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.680427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.680455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.680470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.680484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.680520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.775 qpair failed and we were unable to recover it. 00:33:02.775 [2024-07-11 23:46:23.690120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.775 [2024-07-11 23:46:23.690389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.775 [2024-07-11 23:46:23.690417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.775 [2024-07-11 23:46:23.690432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.775 [2024-07-11 23:46:23.690445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.775 [2024-07-11 23:46:23.690477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.776 qpair failed and we were unable to recover it. 00:33:02.776 [2024-07-11 23:46:23.700154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.776 [2024-07-11 23:46:23.700345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.776 [2024-07-11 23:46:23.700373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.776 [2024-07-11 23:46:23.700388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.776 [2024-07-11 23:46:23.700401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.776 [2024-07-11 23:46:23.700432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.776 qpair failed and we were unable to recover it. 
00:33:02.776 [2024-07-11 23:46:23.710207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.776 [2024-07-11 23:46:23.710369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.776 [2024-07-11 23:46:23.710396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.776 [2024-07-11 23:46:23.710411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.776 [2024-07-11 23:46:23.710424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.776 [2024-07-11 23:46:23.710454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.776 qpair failed and we were unable to recover it. 00:33:02.776 [2024-07-11 23:46:23.720202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:02.776 [2024-07-11 23:46:23.720363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:02.776 [2024-07-11 23:46:23.720392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:02.776 [2024-07-11 23:46:23.720407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:02.776 [2024-07-11 23:46:23.720419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:02.776 [2024-07-11 23:46:23.720450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:02.776 qpair failed and we were unable to recover it. 00:33:03.035 [2024-07-11 23:46:23.730209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.035 [2024-07-11 23:46:23.730376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.035 [2024-07-11 23:46:23.730410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.035 [2024-07-11 23:46:23.730427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.035 [2024-07-11 23:46:23.730440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.035 [2024-07-11 23:46:23.730471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.035 qpair failed and we were unable to recover it. 
00:33:03.035 [2024-07-11 23:46:23.740305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.035 [2024-07-11 23:46:23.740504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.035 [2024-07-11 23:46:23.740533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.035 [2024-07-11 23:46:23.740548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.035 [2024-07-11 23:46:23.740561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.035 [2024-07-11 23:46:23.740592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.035 qpair failed and we were unable to recover it. 00:33:03.035 [2024-07-11 23:46:23.750316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.035 [2024-07-11 23:46:23.750463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.035 [2024-07-11 23:46:23.750490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.035 [2024-07-11 23:46:23.750505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.035 [2024-07-11 23:46:23.750518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.035 [2024-07-11 23:46:23.750549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.035 qpair failed and we were unable to recover it. 00:33:03.035 [2024-07-11 23:46:23.760316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.035 [2024-07-11 23:46:23.760688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.035 [2024-07-11 23:46:23.760717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.035 [2024-07-11 23:46:23.760732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.035 [2024-07-11 23:46:23.760745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.035 [2024-07-11 23:46:23.760776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.035 qpair failed and we were unable to recover it. 
00:33:03.035 [2024-07-11 23:46:23.770318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.035 [2024-07-11 23:46:23.770515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.035 [2024-07-11 23:46:23.770543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.035 [2024-07-11 23:46:23.770559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.035 [2024-07-11 23:46:23.770572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.035 [2024-07-11 23:46:23.770608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.035 qpair failed and we were unable to recover it. 00:33:03.035 [2024-07-11 23:46:23.780360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.035 [2024-07-11 23:46:23.780510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.035 [2024-07-11 23:46:23.780539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.035 [2024-07-11 23:46:23.780554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.035 [2024-07-11 23:46:23.780567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.035 [2024-07-11 23:46:23.780598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.035 qpair failed and we were unable to recover it. 00:33:03.035 [2024-07-11 23:46:23.790450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.035 [2024-07-11 23:46:23.790599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.035 [2024-07-11 23:46:23.790627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.035 [2024-07-11 23:46:23.790643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.035 [2024-07-11 23:46:23.790656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.035 [2024-07-11 23:46:23.790687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.035 qpair failed and we were unable to recover it. 
00:33:03.035 [2024-07-11 23:46:23.800455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:03.035 [2024-07-11 23:46:23.800662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:03.035 [2024-07-11 23:46:23.800691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:03.035 [2024-07-11 23:46:23.800706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:03.035 [2024-07-11 23:46:23.800719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:03.035 [2024-07-11 23:46:23.800750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:03.035 qpair failed and we were unable to recover it.
[... the seven-message connect-failure block above repeated 67 more times between 2024-07-11 23:46:23.810431 and 2024-07-11 23:46:24.472578, identical except for advancing timestamps (same tqpair=0x1b5ff50, qpair id 3, sct 1, sc 130) ...]
00:33:03.559 [2024-07-11 23:46:24.482402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:03.559 [2024-07-11 23:46:24.482559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:03.559 [2024-07-11 23:46:24.482589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:03.559 [2024-07-11 23:46:24.482606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:03.559 [2024-07-11 23:46:24.482619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50
00:33:03.559 [2024-07-11 23:46:24.482650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:03.559 qpair failed and we were unable to recover it.
00:33:03.559 [2024-07-11 23:46:24.492404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.559 [2024-07-11 23:46:24.492607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.559 [2024-07-11 23:46:24.492635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.559 [2024-07-11 23:46:24.492658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.559 [2024-07-11 23:46:24.492673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.559 [2024-07-11 23:46:24.492704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.559 qpair failed and we were unable to recover it. 00:33:03.559 [2024-07-11 23:46:24.502467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.559 [2024-07-11 23:46:24.502656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.559 [2024-07-11 23:46:24.502686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.559 [2024-07-11 23:46:24.502702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.559 [2024-07-11 23:46:24.502716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.559 [2024-07-11 23:46:24.502747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.559 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-11 23:46:24.512456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.819 [2024-07-11 23:46:24.512604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.819 [2024-07-11 23:46:24.512633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.819 [2024-07-11 23:46:24.512650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.819 [2024-07-11 23:46:24.512664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.819 [2024-07-11 23:46:24.512695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.819 qpair failed and we were unable to recover it. 
00:33:03.819 [2024-07-11 23:46:24.522511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.819 [2024-07-11 23:46:24.522664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.819 [2024-07-11 23:46:24.522694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.819 [2024-07-11 23:46:24.522711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.819 [2024-07-11 23:46:24.522724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.819 [2024-07-11 23:46:24.522755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-11 23:46:24.532518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.819 [2024-07-11 23:46:24.532668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.819 [2024-07-11 23:46:24.532697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.819 [2024-07-11 23:46:24.532712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.819 [2024-07-11 23:46:24.532726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.819 [2024-07-11 23:46:24.532757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.819 qpair failed and we were unable to recover it. 00:33:03.819 [2024-07-11 23:46:24.542559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.819 [2024-07-11 23:46:24.542744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.819 [2024-07-11 23:46:24.542773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.819 [2024-07-11 23:46:24.542789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.819 [2024-07-11 23:46:24.542803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.819 [2024-07-11 23:46:24.542834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.819 qpair failed and we were unable to recover it. 
00:33:03.819 [2024-07-11 23:46:24.552610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.819 [2024-07-11 23:46:24.552799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.819 [2024-07-11 23:46:24.552828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.552844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.552858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.552889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.562653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.562838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.562868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.562884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.562898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.562929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.572622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.572788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.572817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.572833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.572847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.572879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 
00:33:03.820 [2024-07-11 23:46:24.582687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.582835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.582865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.582888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.582902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.582933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.592718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.592865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.592894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.592910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.592924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.592955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.602739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.602892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.602922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.602937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.602951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.602982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 
00:33:03.820 [2024-07-11 23:46:24.612765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.612921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.612951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.612968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.612981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.613012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.622793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.622944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.622974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.622990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.623003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.623035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.632807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.632961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.632991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.633007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.633021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.633052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 
00:33:03.820 [2024-07-11 23:46:24.642847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.643006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.643036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.643052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.643066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.643097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.652870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.653024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.653053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.653070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.653084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.653115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.663026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.663214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.663244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.663260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.663273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.663305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 
00:33:03.820 [2024-07-11 23:46:24.672928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.673092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.673127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.673155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.673170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.673202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.682974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.683130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.683167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.683185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.683199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.683231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 00:33:03.820 [2024-07-11 23:46:24.692983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.820 [2024-07-11 23:46:24.693131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.820 [2024-07-11 23:46:24.693171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.820 [2024-07-11 23:46:24.693188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.820 [2024-07-11 23:46:24.693202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.820 [2024-07-11 23:46:24.693232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.820 qpair failed and we were unable to recover it. 
00:33:03.821 [2024-07-11 23:46:24.703033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.821 [2024-07-11 23:46:24.703196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.821 [2024-07-11 23:46:24.703226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.821 [2024-07-11 23:46:24.703242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.821 [2024-07-11 23:46:24.703256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.821 [2024-07-11 23:46:24.703286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.821 qpair failed and we were unable to recover it. 00:33:03.821 [2024-07-11 23:46:24.713057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.821 [2024-07-11 23:46:24.713235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.821 [2024-07-11 23:46:24.713266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.821 [2024-07-11 23:46:24.713282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.821 [2024-07-11 23:46:24.713295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.821 [2024-07-11 23:46:24.713325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.821 qpair failed and we were unable to recover it. 00:33:03.821 [2024-07-11 23:46:24.723105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.821 [2024-07-11 23:46:24.723267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.821 [2024-07-11 23:46:24.723297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.821 [2024-07-11 23:46:24.723313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.821 [2024-07-11 23:46:24.723328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.821 [2024-07-11 23:46:24.723358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.821 qpair failed and we were unable to recover it. 
00:33:03.821 [2024-07-11 23:46:24.733209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.821 [2024-07-11 23:46:24.733359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.821 [2024-07-11 23:46:24.733389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.821 [2024-07-11 23:46:24.733405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.821 [2024-07-11 23:46:24.733419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.821 [2024-07-11 23:46:24.733450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.821 qpair failed and we were unable to recover it. 00:33:03.821 [2024-07-11 23:46:24.743191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.821 [2024-07-11 23:46:24.743345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.821 [2024-07-11 23:46:24.743375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.821 [2024-07-11 23:46:24.743391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.821 [2024-07-11 23:46:24.743405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.821 [2024-07-11 23:46:24.743436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.821 qpair failed and we were unable to recover it. 00:33:03.821 [2024-07-11 23:46:24.753181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.821 [2024-07-11 23:46:24.753369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.821 [2024-07-11 23:46:24.753396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.821 [2024-07-11 23:46:24.753412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.821 [2024-07-11 23:46:24.753425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.821 [2024-07-11 23:46:24.753456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.821 qpair failed and we were unable to recover it. 
00:33:03.821 [2024-07-11 23:46:24.763305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.821 [2024-07-11 23:46:24.763514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.821 [2024-07-11 23:46:24.763549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.821 [2024-07-11 23:46:24.763566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.821 [2024-07-11 23:46:24.763580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:03.821 [2024-07-11 23:46:24.763612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:03.821 qpair failed and we were unable to recover it. 00:33:04.082 [2024-07-11 23:46:24.773267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.082 [2024-07-11 23:46:24.773455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.082 [2024-07-11 23:46:24.773485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.773500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.773514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.773545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.783250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.783410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.783439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.783455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.783468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.783500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 
00:33:04.083 [2024-07-11 23:46:24.793285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.793456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.793485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.793501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.793515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.793546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.803386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.803552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.803581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.803597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.803611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.803648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.813363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.813517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.813546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.813563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.813576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.813607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 
00:33:04.083 [2024-07-11 23:46:24.823392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.823547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.823576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.823592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.823605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.823636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.833414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.833606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.833634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.833650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.833663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.833695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.843471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.843636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.843666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.843682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.843695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.843726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 
00:33:04.083 [2024-07-11 23:46:24.853493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.853673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.853708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.853725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.853739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b5ff50 00:33:04.083 [2024-07-11 23:46:24.853770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.854035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6d9f0 is same with the state(5) to be set 00:33:04.083 [2024-07-11 23:46:24.863633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.863811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.863849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.863867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.863881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00d8000b90 00:33:04.083 [2024-07-11 23:46:24.863915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.873556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.873712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.873743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.873759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.873773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00d8000b90 00:33:04.083 [2024-07-11 23:46:24.873806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.083 qpair failed and we were unable to recover it. 
00:33:04.083 [2024-07-11 23:46:24.883625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.883784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.883821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.883839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.883853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00c8000b90 00:33:04.083 [2024-07-11 23:46:24.883888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.893608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.893786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.893817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.893833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.893853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00c8000b90 00:33:04.083 [2024-07-11 23:46:24.893887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:04.083 qpair failed and we were unable to recover it. 00:33:04.083 [2024-07-11 23:46:24.903649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.903801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.903835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.903852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.083 [2024-07-11 23:46:24.903866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00d0000b90 00:33:04.083 [2024-07-11 23:46:24.903901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:04.083 qpair failed and we were unable to recover it. 
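The same CONNECT failure then spreads to the remaining I/O queue pairs (ids 1, 4 and 2, each through a different tqpair object) before the application prints its controller summary below. For context, a TCP listener like the 10.0.0.2:4420 endpoint for nqn.2016-06.io.spdk:cnode1 is normally provisioned over SPDK's JSON-RPC interface; the sketch below is illustrative only, with the rpc.py path, the Malloc0 backing device and the exact flags assumed rather than taken from this run (the NQN, address, port and SPDKISFASTANDAWESOME serial do appear in this log):

    # Hedged target-provisioning sketch; assumes a running nvmf_tgt and SPDK's
    # stock scripts/rpc.py client. Flag spellings can differ between releases.
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420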
00:33:04.083 [2024-07-11 23:46:24.913688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.083 [2024-07-11 23:46:24.913839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.083 [2024-07-11 23:46:24.913868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.083 [2024-07-11 23:46:24.913884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.084 [2024-07-11 23:46:24.913898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00d0000b90 00:33:04.084 [2024-07-11 23:46:24.913932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:04.084 qpair failed and we were unable to recover it. 00:33:04.084 [2024-07-11 23:46:24.914229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6d9f0 (9): Bad file descriptor 00:33:04.084 Initializing NVMe Controllers 00:33:04.084 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:04.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:04.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:04.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:04.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:04.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:04.084 Initialization complete. Launching workers. 
00:33:04.084 Starting thread on core 1 00:33:04.084 Starting thread on core 2 00:33:04.084 Starting thread on core 3 00:33:04.084 Starting thread on core 0 00:33:04.084 23:46:24 -- host/target_disconnect.sh@59 -- # sync 00:33:04.084 00:33:04.084 real 0m11.451s 00:33:04.084 user 0m21.334s 00:33:04.084 sys 0m5.649s 00:33:04.084 23:46:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:04.084 23:46:24 -- common/autotest_common.sh@10 -- # set +x 00:33:04.084 ************************************ 00:33:04.084 END TEST nvmf_target_disconnect_tc2 00:33:04.084 ************************************ 00:33:04.084 23:46:24 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:33:04.084 23:46:24 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:04.084 23:46:24 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:33:04.084 23:46:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:04.084 23:46:24 -- nvmf/common.sh@116 -- # sync 00:33:04.084 23:46:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:04.084 23:46:24 -- nvmf/common.sh@119 -- # set +e 00:33:04.084 23:46:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:04.084 23:46:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:04.084 rmmod nvme_tcp 00:33:04.084 rmmod nvme_fabrics 00:33:04.084 rmmod nvme_keyring 00:33:04.084 23:46:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:04.084 23:46:25 -- nvmf/common.sh@123 -- # set -e 00:33:04.084 23:46:25 -- nvmf/common.sh@124 -- # return 0 00:33:04.084 23:46:25 -- nvmf/common.sh@477 -- # '[' -n 391888 ']' 00:33:04.084 23:46:25 -- nvmf/common.sh@478 -- # killprocess 391888 00:33:04.084 23:46:25 -- common/autotest_common.sh@926 -- # '[' -z 391888 ']' 00:33:04.084 23:46:25 -- common/autotest_common.sh@930 -- # kill -0 391888 00:33:04.084 23:46:25 -- common/autotest_common.sh@931 -- # uname 00:33:04.084 23:46:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:04.084 23:46:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 391888 00:33:04.343 23:46:25 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:33:04.343 23:46:25 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:33:04.343 23:46:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 391888' 00:33:04.343 killing process with pid 391888 00:33:04.343 23:46:25 -- common/autotest_common.sh@945 -- # kill 391888 00:33:04.343 23:46:25 -- common/autotest_common.sh@950 -- # wait 391888 00:33:04.603 23:46:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:04.603 23:46:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:04.603 23:46:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:04.603 23:46:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:04.603 23:46:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:04.603 23:46:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.603 23:46:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:04.603 23:46:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.154 23:46:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:07.154 00:33:07.154 real 0m16.951s 00:33:07.154 user 0m47.485s 00:33:07.154 sys 0m8.192s 00:33:07.154 23:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.154 23:46:27 -- common/autotest_common.sh@10 -- # set +x 00:33:07.154 ************************************ 00:33:07.154 END TEST nvmf_target_disconnect 00:33:07.154 
************************************ 00:33:07.154 23:46:27 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:33:07.154 23:46:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:07.154 23:46:27 -- common/autotest_common.sh@10 -- # set +x 00:33:07.154 23:46:27 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:33:07.154 00:33:07.154 real 23m49.668s 00:33:07.154 user 68m32.669s 00:33:07.154 sys 6m14.963s 00:33:07.154 23:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.154 23:46:27 -- common/autotest_common.sh@10 -- # set +x 00:33:07.154 ************************************ 00:33:07.154 END TEST nvmf_tcp 00:33:07.154 ************************************ 00:33:07.154 23:46:27 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:33:07.154 23:46:27 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:07.154 23:46:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:07.154 23:46:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:07.154 23:46:27 -- common/autotest_common.sh@10 -- # set +x 00:33:07.154 ************************************ 00:33:07.155 START TEST spdkcli_nvmf_tcp 00:33:07.155 ************************************ 00:33:07.155 23:46:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:07.155 * Looking for test storage... 00:33:07.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:07.155 23:46:27 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:07.155 23:46:27 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:07.155 23:46:27 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:07.155 23:46:27 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.155 23:46:27 -- nvmf/common.sh@7 -- # uname -s 00:33:07.155 23:46:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.155 23:46:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.155 23:46:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.155 23:46:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.155 23:46:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.155 23:46:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.155 23:46:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.155 23:46:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.155 23:46:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.155 23:46:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.155 23:46:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:07.155 23:46:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:07.155 23:46:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.155 23:46:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.155 23:46:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.155 23:46:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.155 23:46:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:33:07.155 23:46:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.155 23:46:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.155 23:46:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.155 23:46:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.155 23:46:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.155 23:46:27 -- paths/export.sh@5 -- # export PATH 00:33:07.155 23:46:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.155 23:46:27 -- nvmf/common.sh@46 -- # : 0 00:33:07.155 23:46:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:07.155 23:46:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:07.155 23:46:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:07.155 23:46:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.155 23:46:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.155 23:46:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:07.155 23:46:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:07.155 23:46:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:07.155 23:46:27 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:07.155 23:46:27 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:07.155 23:46:27 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:07.155 23:46:27 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:07.155 23:46:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:07.155 23:46:27 -- common/autotest_common.sh@10 -- # set +x 00:33:07.155 23:46:27 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:07.155 23:46:27 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=393006 00:33:07.155 23:46:27 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:07.155 23:46:27 -- spdkcli/common.sh@34 -- # waitforlisten 393006 00:33:07.155 23:46:27 -- common/autotest_common.sh@819 -- # '[' -z 393006 ']' 00:33:07.155 23:46:27 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:33:07.155 23:46:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:07.155 23:46:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.155 23:46:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:07.155 23:46:27 -- common/autotest_common.sh@10 -- # set +x 00:33:07.155 [2024-07-11 23:46:27.762989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:07.155 [2024-07-11 23:46:27.763178] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393006 ] 00:33:07.155 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.155 [2024-07-11 23:46:27.862770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:07.155 [2024-07-11 23:46:27.958307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:07.155 [2024-07-11 23:46:27.958525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.155 [2024-07-11 23:46:27.958532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.091 23:46:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:08.091 23:46:28 -- common/autotest_common.sh@852 -- # return 0 00:33:08.091 23:46:28 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:08.091 23:46:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:08.092 23:46:28 -- common/autotest_common.sh@10 -- # set +x 00:33:08.092 23:46:28 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:08.092 23:46:28 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:08.092 23:46:28 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:08.092 23:46:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:08.092 23:46:28 -- common/autotest_common.sh@10 -- # set +x 00:33:08.092 23:46:28 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:08.092 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:08.092 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:08.092 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:08.092 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:08.092 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:08.092 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:08.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:08.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:08.092 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:08.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:08.092 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:08.092 ' 00:33:08.350 [2024-07-11 23:46:29.252108] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:10.887 [2024-07-11 23:46:31.437269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.822 [2024-07-11 23:46:32.677778] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:14.361 [2024-07-11 23:46:34.965019] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:16.271 [2024-07-11 23:46:36.939506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:17.656 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:17.656 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:17.656 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:17.656 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:17.656 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:17.656 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:17.656 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:17.656 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:17.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:17.657 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:17.657 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:17.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:17.657 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:17.657 23:46:38 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:17.657 23:46:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:17.657 23:46:38 -- common/autotest_common.sh@10 -- # set +x 00:33:17.657 23:46:38 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:17.657 23:46:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:17.657 23:46:38 -- common/autotest_common.sh@10 -- # set +x 00:33:17.657 23:46:38 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:17.657 23:46:38 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:18.226 23:46:39 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:18.226 23:46:39 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:18.226 23:46:39 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:18.226 23:46:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:18.226 23:46:39 -- common/autotest_common.sh@10 -- # set +x 00:33:18.226 23:46:39 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:18.226 23:46:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:18.226 23:46:39 -- common/autotest_common.sh@10 -- # set +x 00:33:18.226 23:46:39 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:18.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:18.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:18.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:18.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:18.226 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:18.226 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:18.226 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:18.226 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:18.226 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:18.226 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:18.226 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:18.226 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:18.226 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:18.226 ' 00:33:24.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:24.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:24.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:24.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:24.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:24.797 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:24.797 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:24.797 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:24.797 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:24.797 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:24.797 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:24.797 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:24.797 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:24.797 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:24.797 23:46:44 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:24.797 23:46:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:24.797 23:46:44 -- common/autotest_common.sh@10 -- # set +x 00:33:24.797 23:46:44 -- spdkcli/nvmf.sh@90 -- # killprocess 393006 00:33:24.797 23:46:44 -- common/autotest_common.sh@926 -- # '[' -z 393006 ']' 00:33:24.797 23:46:44 -- common/autotest_common.sh@930 -- # kill -0 393006 00:33:24.797 23:46:44 -- common/autotest_common.sh@931 -- # uname 00:33:24.797 23:46:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:24.797 23:46:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 393006 00:33:24.797 23:46:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:24.797 23:46:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:24.797 23:46:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 393006' 00:33:24.797 killing process with pid 393006 00:33:24.797 23:46:44 -- common/autotest_common.sh@945 -- # kill 393006 00:33:24.797 [2024-07-11 23:46:44.743555] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:24.797 23:46:44 -- common/autotest_common.sh@950 -- # wait 393006 00:33:24.797 23:46:44 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:24.797 23:46:44 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:24.797 23:46:44 -- spdkcli/common.sh@13 -- # '[' -n 393006 ']' 00:33:24.797 23:46:44 -- spdkcli/common.sh@14 -- # killprocess 393006 00:33:24.797 23:46:44 -- common/autotest_common.sh@926 -- # '[' -z 393006 ']' 00:33:24.797 23:46:44 -- common/autotest_common.sh@930 -- # kill -0 393006 00:33:24.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (393006) - No such process 00:33:24.797 23:46:44 -- common/autotest_common.sh@953 -- # echo 'Process with pid 393006 is not found' 00:33:24.797 Process with pid 393006 is not found 00:33:24.797 23:46:44 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:24.797 23:46:44 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:24.797 23:46:44 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:24.797 00:33:24.797 real 0m17.407s 00:33:24.797 user 0m37.136s 00:33:24.797 sys 0m1.057s 00:33:24.797 23:46:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.797 23:46:44 -- common/autotest_common.sh@10 -- # set +x 00:33:24.797 ************************************ 00:33:24.797 END TEST spdkcli_nvmf_tcp 00:33:24.797 ************************************ 00:33:24.797 23:46:45 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:24.797 23:46:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:24.797 23:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:24.797 23:46:45 -- common/autotest_common.sh@10 -- # set +x 00:33:24.797 ************************************ 00:33:24.797 START TEST 
nvmf_identify_passthru 00:33:24.797 ************************************ 00:33:24.797 23:46:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:24.797 * Looking for test storage... 00:33:24.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:24.797 23:46:45 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.797 23:46:45 -- nvmf/common.sh@7 -- # uname -s 00:33:24.797 23:46:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.797 23:46:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.797 23:46:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.797 23:46:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.797 23:46:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.797 23:46:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.797 23:46:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.797 23:46:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.797 23:46:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.797 23:46:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.797 23:46:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:24.797 23:46:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:24.797 23:46:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.797 23:46:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.797 23:46:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.797 23:46:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.797 23:46:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.797 23:46:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.797 23:46:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.797 23:46:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- paths/export.sh@5 -- # export PATH 00:33:24.797 
23:46:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- nvmf/common.sh@46 -- # : 0 00:33:24.797 23:46:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:24.797 23:46:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:24.797 23:46:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:24.797 23:46:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.797 23:46:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.797 23:46:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:24.797 23:46:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:24.797 23:46:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:24.797 23:46:45 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.797 23:46:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.797 23:46:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.797 23:46:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.797 23:46:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- paths/export.sh@5 -- # export PATH 00:33:24.797 23:46:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.797 23:46:45 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:24.797 23:46:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:24.797 23:46:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.797 23:46:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:24.797 23:46:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:24.797 23:46:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:24.797 23:46:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.797 23:46:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.797 23:46:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.797 23:46:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:24.797 23:46:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:24.797 23:46:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:24.797 23:46:45 -- common/autotest_common.sh@10 -- # set +x 00:33:26.727 23:46:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:26.727 23:46:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:26.727 23:46:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:26.727 23:46:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:26.727 23:46:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:26.727 23:46:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:26.727 23:46:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:26.727 23:46:47 -- nvmf/common.sh@294 -- # net_devs=() 00:33:26.727 23:46:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:26.727 23:46:47 -- nvmf/common.sh@295 -- # e810=() 00:33:26.727 23:46:47 -- nvmf/common.sh@295 -- # local -ga e810 00:33:26.728 23:46:47 -- nvmf/common.sh@296 -- # x722=() 00:33:26.728 23:46:47 -- nvmf/common.sh@296 -- # local -ga x722 00:33:26.728 23:46:47 -- nvmf/common.sh@297 -- # mlx=() 00:33:26.728 23:46:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:26.728 23:46:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.728 23:46:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:26.728 23:46:47 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:26.728 23:46:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:26.728 23:46:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:26.728 23:46:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:26.728 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:26.728 23:46:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:26.728 23:46:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:26.728 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:26.728 23:46:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:26.728 23:46:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:26.728 23:46:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.728 23:46:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:26.728 23:46:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.728 23:46:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:26.728 Found net devices under 0000:84:00.0: cvl_0_0 00:33:26.728 23:46:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.728 23:46:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:26.728 23:46:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.728 23:46:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:26.728 23:46:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.728 23:46:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:26.728 Found net devices under 0000:84:00.1: cvl_0_1 00:33:26.728 23:46:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.728 23:46:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:26.728 23:46:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:26.728 23:46:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:26.728 23:46:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:26.728 23:46:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.728 23:46:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.728 23:46:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.728 23:46:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:26.728 23:46:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.728 23:46:47 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.728 23:46:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:26.728 23:46:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.728 23:46:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.728 23:46:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:26.728 23:46:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:26.728 23:46:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.728 23:46:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.988 23:46:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.988 23:46:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.988 23:46:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:26.988 23:46:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.988 23:46:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.988 23:46:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.988 23:46:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:26.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:33:26.988 00:33:26.988 --- 10.0.0.2 ping statistics --- 00:33:26.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.988 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:26.988 23:46:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:26.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:33:26.988 00:33:26.988 --- 10.0.0.1 ping statistics --- 00:33:26.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.988 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:33:26.988 23:46:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.988 23:46:47 -- nvmf/common.sh@410 -- # return 0 00:33:26.988 23:46:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:26.988 23:46:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.988 23:46:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:26.988 23:46:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:26.988 23:46:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.988 23:46:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:26.988 23:46:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:26.988 23:46:47 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:26.988 23:46:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:26.988 23:46:47 -- common/autotest_common.sh@10 -- # set +x 00:33:26.988 23:46:47 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:26.988 23:46:47 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:26.988 23:46:47 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:26.988 23:46:47 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:26.988 23:46:47 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:26.988 23:46:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:26.988 23:46:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:26.988 23:46:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:33:26.988 23:46:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:26.988 23:46:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:26.988 23:46:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:26.988 23:46:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:33:26.988 23:46:47 -- common/autotest_common.sh@1512 -- # echo 0000:82:00.0 00:33:26.988 23:46:47 -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:33:26.988 23:46:47 -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:33:26.988 23:46:47 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:33:26.988 23:46:47 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:26.988 23:46:47 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:27.247 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.430 23:46:52 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:33:31.430 23:46:52 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:33:31.430 23:46:52 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:31.430 23:46:52 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:31.430 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.614 23:46:56 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:35.614 23:46:56 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:35.614 23:46:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:35.614 23:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:35.614 23:46:56 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:35.614 23:46:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:35.614 23:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:35.614 23:46:56 -- target/identify_passthru.sh@31 -- # nvmfpid=397848 00:33:35.614 23:46:56 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:35.614 23:46:56 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:35.614 23:46:56 -- target/identify_passthru.sh@35 -- # waitforlisten 397848 00:33:35.614 23:46:56 -- common/autotest_common.sh@819 -- # '[' -z 397848 ']' 00:33:35.614 23:46:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.614 23:46:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:35.614 23:46:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.614 23:46:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:35.614 23:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:35.614 [2024-07-11 23:46:56.473724] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
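With --wait-for-rpc on the nvmf_tgt command line above, initialization pauses until the JSON-RPC calls traced below arrive, so passthru identify can be enabled before the framework starts. The rpc_cmd helper in the trace wraps scripts/rpc.py; a sketch of the equivalent direct invocations, assuming the default /var/tmp/spdk.sock socket (all three method names and flags appear verbatim in the trace at identify_passthru.sh@36-38):

    # Same RPC sequence as identify_passthru.sh@36-38, issued via rpc.py directly.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    $RPC -s $SOCK nvmf_set_config --passthru-identify-ctrlr  # forward admin Identify to the real ctrlr
    $RPC -s $SOCK framework_start_init                       # release the --wait-for-rpc pause
    $RPC -s $SOCK nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte IO unit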
00:33:35.614 [2024-07-11 23:46:56.473812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.614 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.614 [2024-07-11 23:46:56.552973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:35.872 [2024-07-11 23:46:56.648510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:35.872 [2024-07-11 23:46:56.648693] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.872 [2024-07-11 23:46:56.648714] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.872 [2024-07-11 23:46:56.648729] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:35.872 [2024-07-11 23:46:56.648840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.872 [2024-07-11 23:46:56.648918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:35.872 [2024-07-11 23:46:56.648967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:35.872 [2024-07-11 23:46:56.648969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.130 23:46:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:36.130 23:46:56 -- common/autotest_common.sh@852 -- # return 0 00:33:36.130 23:46:56 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:36.130 23:46:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.130 23:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:36.130 INFO: Log level set to 20 00:33:36.130 INFO: Requests: 00:33:36.130 { 00:33:36.130 "jsonrpc": "2.0", 00:33:36.130 "method": "nvmf_set_config", 00:33:36.130 "id": 1, 00:33:36.130 "params": { 00:33:36.130 "admin_cmd_passthru": { 00:33:36.130 "identify_ctrlr": true 00:33:36.130 } 00:33:36.130 } 00:33:36.130 } 00:33:36.130 00:33:36.130 INFO: response: 00:33:36.130 { 00:33:36.131 "jsonrpc": "2.0", 00:33:36.131 "id": 1, 00:33:36.131 "result": true 00:33:36.131 } 00:33:36.131 00:33:36.131 23:46:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.131 23:46:56 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:36.131 23:46:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.131 23:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:36.131 INFO: Setting log level to 20 00:33:36.131 INFO: Setting log level to 20 00:33:36.131 INFO: Log level set to 20 00:33:36.131 INFO: Log level set to 20 00:33:36.131 INFO: Requests: 00:33:36.131 { 00:33:36.131 "jsonrpc": "2.0", 00:33:36.131 "method": "framework_start_init", 00:33:36.131 "id": 1 00:33:36.131 } 00:33:36.131 00:33:36.131 INFO: Requests: 00:33:36.131 { 00:33:36.131 "jsonrpc": "2.0", 00:33:36.131 "method": "framework_start_init", 00:33:36.131 "id": 1 00:33:36.131 } 00:33:36.131 00:33:36.131 [2024-07-11 23:46:56.996592] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:36.131 INFO: response: 00:33:36.131 { 00:33:36.131 "jsonrpc": "2.0", 00:33:36.131 "id": 1, 00:33:36.131 "result": true 00:33:36.131 } 00:33:36.131 00:33:36.131 INFO: response: 00:33:36.131 { 00:33:36.131 "jsonrpc": "2.0", 00:33:36.131 "id": 1, 00:33:36.131 "result": true 00:33:36.131 } 00:33:36.131 00:33:36.131 23:46:57 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.131 23:46:57 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:36.131 23:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.131 23:46:57 -- common/autotest_common.sh@10 -- # set +x 00:33:36.131 INFO: Setting log level to 40 00:33:36.131 INFO: Setting log level to 40 00:33:36.131 INFO: Setting log level to 40 00:33:36.131 [2024-07-11 23:46:57.006772] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.131 23:46:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.131 23:46:57 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:36.131 23:46:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:36.131 23:46:57 -- common/autotest_common.sh@10 -- # set +x 00:33:36.131 23:46:57 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:33:36.131 23:46:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.131 23:46:57 -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 Nvme0n1 00:33:39.414 23:46:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.414 23:46:59 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:39.414 23:46:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.414 23:46:59 -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 23:46:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.414 23:46:59 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:39.414 23:46:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.414 23:46:59 -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 23:46:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.414 23:46:59 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.414 23:46:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.414 23:46:59 -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 [2024-07-11 23:46:59.910540] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.414 23:46:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.414 23:46:59 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:39.414 23:46:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.414 23:46:59 -- common/autotest_common.sh@10 -- # set +x 00:33:39.414 [2024-07-11 23:46:59.918241] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:39.414 [ 00:33:39.414 { 00:33:39.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:39.414 "subtype": "Discovery", 00:33:39.414 "listen_addresses": [], 00:33:39.414 "allow_any_host": true, 00:33:39.414 "hosts": [] 00:33:39.414 }, 00:33:39.414 { 00:33:39.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.414 "subtype": "NVMe", 00:33:39.414 "listen_addresses": [ 00:33:39.414 { 00:33:39.414 "transport": "TCP", 00:33:39.415 "trtype": "TCP", 00:33:39.415 "adrfam": "IPv4", 00:33:39.415 "traddr": "10.0.0.2", 00:33:39.415 "trsvcid": "4420" 00:33:39.415 } 00:33:39.415 ], 00:33:39.415 "allow_any_host": true, 00:33:39.415 "hosts": [], 00:33:39.415 "serial_number": "SPDK00000000000001", 
00:33:39.415 "model_number": "SPDK bdev Controller", 00:33:39.415 "max_namespaces": 1, 00:33:39.415 "min_cntlid": 1, 00:33:39.415 "max_cntlid": 65519, 00:33:39.415 "namespaces": [ 00:33:39.415 { 00:33:39.415 "nsid": 1, 00:33:39.415 "bdev_name": "Nvme0n1", 00:33:39.415 "name": "Nvme0n1", 00:33:39.415 "nguid": "A47B4FA080484EBC89F7BFC6E81321C6", 00:33:39.415 "uuid": "a47b4fa0-8048-4ebc-89f7-bfc6e81321c6" 00:33:39.415 } 00:33:39.415 ] 00:33:39.415 } 00:33:39.415 ] 00:33:39.415 23:46:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.415 23:46:59 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:39.415 23:46:59 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:39.415 23:46:59 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:39.415 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.415 23:47:00 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:33:39.415 23:47:00 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:39.415 23:47:00 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:39.415 23:47:00 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:39.415 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.415 23:47:00 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:39.415 23:47:00 -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:33:39.415 23:47:00 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:39.415 23:47:00 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:39.415 23:47:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.415 23:47:00 -- common/autotest_common.sh@10 -- # set +x 00:33:39.415 23:47:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.415 23:47:00 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:39.415 23:47:00 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:39.415 23:47:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:39.415 23:47:00 -- nvmf/common.sh@116 -- # sync 00:33:39.415 23:47:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:39.415 23:47:00 -- nvmf/common.sh@119 -- # set +e 00:33:39.415 23:47:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:39.415 23:47:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:39.415 rmmod nvme_tcp 00:33:39.415 rmmod nvme_fabrics 00:33:39.415 rmmod nvme_keyring 00:33:39.415 23:47:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:39.415 23:47:00 -- nvmf/common.sh@123 -- # set -e 00:33:39.415 23:47:00 -- nvmf/common.sh@124 -- # return 0 00:33:39.415 23:47:00 -- nvmf/common.sh@477 -- # '[' -n 397848 ']' 00:33:39.415 23:47:00 -- nvmf/common.sh@478 -- # killprocess 397848 00:33:39.415 23:47:00 -- common/autotest_common.sh@926 -- # '[' -z 397848 ']' 00:33:39.415 23:47:00 -- common/autotest_common.sh@930 -- # kill -0 397848 00:33:39.415 23:47:00 -- common/autotest_common.sh@931 -- # uname 00:33:39.415 23:47:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:39.415 23:47:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 397848 00:33:39.415 23:47:00 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:39.415 23:47:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:39.415 23:47:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 397848' 00:33:39.415 killing process with pid 397848 00:33:39.415 23:47:00 -- common/autotest_common.sh@945 -- # kill 397848 00:33:39.415 [2024-07-11 23:47:00.289822] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:39.415 23:47:00 -- common/autotest_common.sh@950 -- # wait 397848 00:33:41.314 23:47:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:41.314 23:47:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:41.314 23:47:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:41.314 23:47:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:41.314 23:47:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:41.314 23:47:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.314 23:47:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:41.314 23:47:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.214 23:47:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:43.214 00:33:43.214 real 0m18.891s 00:33:43.214 user 0m27.488s 00:33:43.214 sys 0m3.063s 00:33:43.214 23:47:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.214 23:47:03 -- common/autotest_common.sh@10 -- # set +x 00:33:43.214 ************************************ 00:33:43.214 END TEST nvmf_identify_passthru 00:33:43.214 ************************************ 00:33:43.214 23:47:03 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:43.214 23:47:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:43.214 23:47:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:43.214 23:47:03 -- common/autotest_common.sh@10 -- # set +x 00:33:43.214 ************************************ 00:33:43.214 START TEST nvmf_dif 00:33:43.214 ************************************ 00:33:43.214 23:47:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:43.214 * Looking for test storage... 
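The dif test begins by sourcing the same nvmf/common.sh preamble, and nvmftestinit rescans the physical NICs: the e810/x722/mlx arrays traced below collect PCI addresses by vendor:device ID before the cvl_0_0/cvl_0_1 interfaces are picked. A condensed sketch of that discovery loop, with pci_bus_cache stubbed here (in common.sh it is assumed to be filled from the PCI bus elsewhere; device IDs, addresses, and netdev names are taken from this log):

    #!/usr/bin/env bash
    # Condensed sketch of the gather_supported_nvmf_pci_devs trace below.
    intel=0x8086
    declare -A pci_bus_cache                                  # stub; common.sh populates this
    pci_bus_cache["$intel:0x159b"]="0000:84:00.0 0000:84:00.1" # the two ports in this log
    declare -a e810 pci_devs net_devs
    e810+=(${pci_bus_cache["$intel:0x1592"]})                 # E810-C, empty on this node
    e810+=(${pci_bus_cache["$intel:0x159b"]})                 # second E810 variant, present here
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # kernel netdev dir for the port
        pci_net_devs=("${pci_net_devs[@]##*/}")               # e.g. cvl_0_0, cvl_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done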
00:33:43.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:43.214 23:47:04 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.214 23:47:04 -- nvmf/common.sh@7 -- # uname -s 00:33:43.214 23:47:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.214 23:47:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.214 23:47:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.214 23:47:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.214 23:47:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.214 23:47:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.214 23:47:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.214 23:47:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.214 23:47:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.214 23:47:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.214 23:47:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:43.214 23:47:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:43.214 23:47:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.214 23:47:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.214 23:47:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.214 23:47:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.214 23:47:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.214 23:47:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.214 23:47:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.214 23:47:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.214 23:47:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.214 23:47:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.214 23:47:04 -- paths/export.sh@5 -- # export PATH 00:33:43.214 23:47:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.214 23:47:04 -- nvmf/common.sh@46 -- # : 0 00:33:43.214 23:47:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:43.214 23:47:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:43.214 23:47:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:43.214 23:47:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.214 23:47:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.214 23:47:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:43.214 23:47:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:43.214 23:47:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:43.214 23:47:04 -- target/dif.sh@15 -- # NULL_META=16 00:33:43.214 23:47:04 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:43.214 23:47:04 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:43.214 23:47:04 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:43.214 23:47:04 -- target/dif.sh@135 -- # nvmftestinit 00:33:43.214 23:47:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:43.214 23:47:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.214 23:47:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:43.214 23:47:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:43.214 23:47:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:43.214 23:47:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.214 23:47:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:43.214 23:47:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.214 23:47:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:43.214 23:47:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:43.214 23:47:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:43.214 23:47:04 -- common/autotest_common.sh@10 -- # set +x 00:33:45.743 23:47:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:45.743 23:47:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:45.743 23:47:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:45.743 23:47:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:45.743 23:47:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:45.743 23:47:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:45.743 23:47:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:45.743 23:47:06 -- nvmf/common.sh@294 -- # net_devs=() 00:33:45.743 23:47:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:45.743 23:47:06 -- nvmf/common.sh@295 -- # e810=() 00:33:45.743 23:47:06 -- nvmf/common.sh@295 -- # local -ga e810 00:33:45.743 23:47:06 -- nvmf/common.sh@296 -- # x722=() 00:33:45.743 23:47:06 -- nvmf/common.sh@296 -- # local -ga x722 00:33:45.743 23:47:06 -- nvmf/common.sh@297 -- # mlx=() 00:33:45.743 23:47:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:45.743 23:47:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:33:45.743 23:47:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.743 23:47:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.744 23:47:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.744 23:47:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:45.744 23:47:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:45.744 23:47:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:45.744 23:47:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:45.744 23:47:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:45.744 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:45.744 23:47:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:45.744 23:47:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:45.744 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:45.744 23:47:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:45.744 23:47:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:45.744 23:47:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.744 23:47:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:45.744 23:47:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.744 23:47:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:45.744 Found net devices under 0000:84:00.0: cvl_0_0 00:33:45.744 23:47:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.744 23:47:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:45.744 23:47:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.744 23:47:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:45.744 23:47:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.744 23:47:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:45.744 Found net devices under 0000:84:00.1: cvl_0_1 00:33:45.744 23:47:06 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:45.744 23:47:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:45.744 23:47:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:45.744 23:47:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:45.744 23:47:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:45.744 23:47:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.744 23:47:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.744 23:47:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.744 23:47:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:45.744 23:47:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.744 23:47:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.744 23:47:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:45.744 23:47:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.744 23:47:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.744 23:47:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:45.744 23:47:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:45.744 23:47:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.744 23:47:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.002 23:47:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.002 23:47:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.002 23:47:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:46.002 23:47:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.002 23:47:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.002 23:47:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.002 23:47:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:46.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:33:46.002 00:33:46.002 --- 10.0.0.2 ping statistics --- 00:33:46.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.002 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:33:46.002 23:47:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:33:46.002 00:33:46.002 --- 10.0.0.1 ping statistics --- 00:33:46.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.002 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:33:46.002 23:47:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.002 23:47:06 -- nvmf/common.sh@410 -- # return 0 00:33:46.002 23:47:06 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:46.002 23:47:06 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:47.376 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:47.376 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:47.376 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:47.376 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:47.376 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:47.376 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:47.376 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:47.634 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:47.634 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:47.634 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:47.634 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:47.634 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:47.634 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:47.635 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:47.635 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:47.635 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:47.635 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:47.635 23:47:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.635 23:47:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:47.635 23:47:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:47.635 23:47:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.635 23:47:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:47.635 23:47:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:47.635 23:47:08 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:47.635 23:47:08 -- target/dif.sh@137 -- # nvmfappstart 00:33:47.635 23:47:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:47.635 23:47:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:47.635 23:47:08 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 23:47:08 -- nvmf/common.sh@469 -- # nvmfpid=401196 00:33:47.635 23:47:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:47.635 23:47:08 -- nvmf/common.sh@470 -- # waitforlisten 401196 00:33:47.635 23:47:08 -- common/autotest_common.sh@819 -- # '[' -z 401196 ']' 00:33:47.635 23:47:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.635 23:47:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:47.635 23:47:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
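Condensing the target/initiator plumbing traced above (every command below is taken verbatim from the trace): the target NIC cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, the SPDK target is launched inside that namespace, and the host keeps cvl_0_1 as the 10.0.0.1 initiator side.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> netns reachability check
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

waitforlisten then polls pid 401196 until the app answers on /var/tmp/spdk.sock; once it does, the trace below continues with the DPDK EAL banner and nvmf_create_transport -t tcp -o --dif-insert-or-strip.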
00:33:47.635 23:47:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:47.635 23:47:08 -- common/autotest_common.sh@10 -- # set +x 00:33:47.892 [2024-07-11 23:47:08.598346] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:47.893 [2024-07-11 23:47:08.598430] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.893 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.893 [2024-07-11 23:47:08.678892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.893 [2024-07-11 23:47:08.774290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:47.893 [2024-07-11 23:47:08.774458] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.893 [2024-07-11 23:47:08.774477] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.893 [2024-07-11 23:47:08.774491] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.893 [2024-07-11 23:47:08.774521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.834 23:47:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:48.834 23:47:09 -- common/autotest_common.sh@852 -- # return 0 00:33:48.834 23:47:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:48.834 23:47:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:48.834 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:48.834 23:47:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.834 23:47:09 -- target/dif.sh@139 -- # create_transport 00:33:48.834 23:47:09 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:48.834 23:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:48.835 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:48.835 [2024-07-11 23:47:09.690837] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.835 23:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:48.835 23:47:09 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:48.835 23:47:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:48.835 23:47:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:48.835 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:48.835 ************************************ 00:33:48.835 START TEST fio_dif_1_default 00:33:48.835 ************************************ 00:33:48.835 23:47:09 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:33:48.835 23:47:09 -- target/dif.sh@86 -- # create_subsystems 0 00:33:48.835 23:47:09 -- target/dif.sh@28 -- # local sub 00:33:48.835 23:47:09 -- target/dif.sh@30 -- # for sub in "$@" 00:33:48.835 23:47:09 -- target/dif.sh@31 -- # create_subsystem 0 00:33:48.835 23:47:09 -- target/dif.sh@18 -- # local sub_id=0 00:33:48.835 23:47:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:48.835 23:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:48.835 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:48.835 bdev_null0 00:33:48.835 23:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:48.835 23:47:09 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:48.835 23:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:48.835 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:48.835 23:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:48.835 23:47:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:48.835 23:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:48.835 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:48.835 23:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:48.835 23:47:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.835 23:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:48.835 23:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:48.835 [2024-07-11 23:47:09.731137] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.835 23:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:48.835 23:47:09 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:48.835 23:47:09 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:48.835 23:47:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:48.835 23:47:09 -- nvmf/common.sh@520 -- # config=() 00:33:48.835 23:47:09 -- nvmf/common.sh@520 -- # local subsystem config 00:33:48.835 23:47:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:48.835 23:47:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:48.835 { 00:33:48.835 "params": { 00:33:48.835 "name": "Nvme$subsystem", 00:33:48.835 "trtype": "$TEST_TRANSPORT", 00:33:48.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.835 "adrfam": "ipv4", 00:33:48.835 "trsvcid": "$NVMF_PORT", 00:33:48.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.835 "hdgst": ${hdgst:-false}, 00:33:48.835 "ddgst": ${ddgst:-false} 00:33:48.835 }, 00:33:48.835 "method": "bdev_nvme_attach_controller" 00:33:48.835 } 00:33:48.835 EOF 00:33:48.835 )") 00:33:48.835 23:47:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.835 23:47:09 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.835 23:47:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:48.835 23:47:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.835 23:47:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:48.835 23:47:09 -- target/dif.sh@82 -- # gen_fio_conf 00:33:48.835 23:47:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.835 23:47:09 -- common/autotest_common.sh@1320 -- # shift 00:33:48.835 23:47:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:48.835 23:47:09 -- target/dif.sh@54 -- # local file 00:33:48.835 23:47:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.835 23:47:09 -- target/dif.sh@56 -- # cat 00:33:48.835 23:47:09 -- nvmf/common.sh@542 -- # cat 00:33:48.835 23:47:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.835 23:47:09 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:33:48.835 23:47:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:48.835 23:47:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:48.835 23:47:09 -- target/dif.sh@72 -- # (( file <= files )) 00:33:48.835 23:47:09 -- nvmf/common.sh@544 -- # jq . 00:33:48.835 23:47:09 -- nvmf/common.sh@545 -- # IFS=, 00:33:48.835 23:47:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:48.835 "params": { 00:33:48.835 "name": "Nvme0", 00:33:48.835 "trtype": "tcp", 00:33:48.835 "traddr": "10.0.0.2", 00:33:48.835 "adrfam": "ipv4", 00:33:48.835 "trsvcid": "4420", 00:33:48.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.835 "hdgst": false, 00:33:48.835 "ddgst": false 00:33:48.835 }, 00:33:48.835 "method": "bdev_nvme_attach_controller" 00:33:48.835 }' 00:33:48.835 23:47:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:48.835 23:47:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:48.835 23:47:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.835 23:47:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.835 23:47:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:48.835 23:47:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:49.094 23:47:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:49.094 23:47:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:49.094 23:47:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:49.094 23:47:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.094 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:49.094 fio-3.35 00:33:49.094 Starting 1 thread 00:33:49.352 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.918 [2024-07-11 23:47:10.577815] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
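Pulling the fio plumbing above into one place: gen_nvmf_target_json writes the bdev_nvme_attach_controller config shown just above to one file descriptor, gen_fio_conf writes the job file to another, and fio runs with the SPDK bdev engine preloaded, exactly as traced:

    LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The rpc.c *ERROR* lines around this point look alarming but appear harmless here: the fio plugin tries to start its own RPC server on /var/tmp/spdk.sock, finds the target already bound to it, and the job still runs to completion below.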
00:33:49.918 [2024-07-11 23:47:10.577902] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:59.878 00:33:59.878 filename0: (groupid=0, jobs=1): err= 0: pid=401555: Thu Jul 11 23:47:20 2024 00:33:59.878 read: IOPS=186, BW=745KiB/s (763kB/s)(7456KiB/10004msec) 00:33:59.878 slat (nsec): min=5661, max=72021, avg=9138.92, stdev=2811.89 00:33:59.878 clat (usec): min=789, max=43596, avg=21438.74, stdev=20509.04 00:33:59.878 lat (usec): min=797, max=43627, avg=21447.87, stdev=20508.92 00:33:59.878 clat percentiles (usec): 00:33:59.878 | 1.00th=[ 807], 5.00th=[ 824], 10.00th=[ 832], 20.00th=[ 840], 00:33:59.879 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:33:59.879 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:59.879 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:33:59.879 | 99.99th=[43779] 00:33:59.879 bw ( KiB/s): min= 672, max= 768, per=99.83%, avg=744.00, stdev=34.24, samples=20 00:33:59.879 iops : min= 168, max= 192, avg=186.00, stdev= 8.56, samples=20 00:33:59.879 lat (usec) : 1000=49.79% 00:33:59.879 lat (msec) : 50=50.21% 00:33:59.879 cpu : usr=90.93%, sys=8.77%, ctx=19, majf=0, minf=293 00:33:59.879 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.879 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.879 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:59.879 00:33:59.879 Run status group 0 (all jobs): 00:33:59.879 READ: bw=745KiB/s (763kB/s), 745KiB/s-745KiB/s (763kB/s-763kB/s), io=7456KiB (7635kB), run=10004-10004msec 00:34:00.137 23:47:21 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:00.137 23:47:21 -- target/dif.sh@43 -- # local sub 00:34:00.137 23:47:21 -- target/dif.sh@45 -- # for sub in "$@" 00:34:00.137 23:47:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:00.137 23:47:21 -- target/dif.sh@36 -- # local sub_id=0 00:34:00.137 23:47:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:00.137 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.137 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.137 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.137 23:47:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:00.137 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.137 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.137 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.137 00:34:00.137 real 0m11.352s 00:34:00.137 user 0m10.352s 00:34:00.137 sys 0m1.217s 00:34:00.137 23:47:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.137 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.137 ************************************ 00:34:00.137 END TEST fio_dif_1_default 00:34:00.137 ************************************ 00:34:00.137 23:47:21 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:00.137 23:47:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:00.137 23:47:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:00.137 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.137 ************************************ 00:34:00.137 START TEST fio_dif_1_multi_subsystems 
00:34:00.137 ************************************ 00:34:00.137 23:47:21 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:00.137 23:47:21 -- target/dif.sh@92 -- # local files=1 00:34:00.137 23:47:21 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:00.137 23:47:21 -- target/dif.sh@28 -- # local sub 00:34:00.137 23:47:21 -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.137 23:47:21 -- target/dif.sh@31 -- # create_subsystem 0 00:34:00.137 23:47:21 -- target/dif.sh@18 -- # local sub_id=0 00:34:00.137 23:47:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:00.137 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.137 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 bdev_null0 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:00.397 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.397 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:00.397 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.397 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:00.397 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.397 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 [2024-07-11 23:47:21.111606] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.397 23:47:21 -- target/dif.sh@31 -- # create_subsystem 1 00:34:00.397 23:47:21 -- target/dif.sh@18 -- # local sub_id=1 00:34:00.397 23:47:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:00.397 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.397 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 bdev_null1 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:00.397 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.397 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:00.397 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.397 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.397 23:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.397 23:47:21 -- common/autotest_common.sh@10 -- # 
set +x 00:34:00.397 23:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.397 23:47:21 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:00.397 23:47:21 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:00.397 23:47:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:00.397 23:47:21 -- nvmf/common.sh@520 -- # config=() 00:34:00.397 23:47:21 -- nvmf/common.sh@520 -- # local subsystem config 00:34:00.397 23:47:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:00.397 23:47:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.397 23:47:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:00.397 { 00:34:00.397 "params": { 00:34:00.397 "name": "Nvme$subsystem", 00:34:00.397 "trtype": "$TEST_TRANSPORT", 00:34:00.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.397 "adrfam": "ipv4", 00:34:00.397 "trsvcid": "$NVMF_PORT", 00:34:00.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.397 "hdgst": ${hdgst:-false}, 00:34:00.397 "ddgst": ${ddgst:-false} 00:34:00.397 }, 00:34:00.397 "method": "bdev_nvme_attach_controller" 00:34:00.397 } 00:34:00.397 EOF 00:34:00.397 )") 00:34:00.397 23:47:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.397 23:47:21 -- target/dif.sh@82 -- # gen_fio_conf 00:34:00.397 23:47:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:00.397 23:47:21 -- target/dif.sh@54 -- # local file 00:34:00.397 23:47:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:00.397 23:47:21 -- target/dif.sh@56 -- # cat 00:34:00.397 23:47:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:00.397 23:47:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.397 23:47:21 -- common/autotest_common.sh@1320 -- # shift 00:34:00.397 23:47:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:00.397 23:47:21 -- nvmf/common.sh@542 -- # cat 00:34:00.397 23:47:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.397 23:47:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.397 23:47:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:00.398 23:47:21 -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.398 23:47:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:00.398 23:47:21 -- target/dif.sh@73 -- # cat 00:34:00.398 23:47:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:00.398 23:47:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:00.398 23:47:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:00.398 { 00:34:00.398 "params": { 00:34:00.398 "name": "Nvme$subsystem", 00:34:00.398 "trtype": "$TEST_TRANSPORT", 00:34:00.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.398 "adrfam": "ipv4", 00:34:00.398 "trsvcid": "$NVMF_PORT", 00:34:00.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.398 "hdgst": ${hdgst:-false}, 00:34:00.398 "ddgst": ${ddgst:-false} 00:34:00.398 }, 00:34:00.398 "method": "bdev_nvme_attach_controller" 00:34:00.398 } 00:34:00.398 EOF 00:34:00.398 )") 00:34:00.398 23:47:21 -- nvmf/common.sh@542 -- # cat 00:34:00.398 
23:47:21 -- target/dif.sh@72 -- # (( file++ )) 00:34:00.398 23:47:21 -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.398 23:47:21 -- nvmf/common.sh@544 -- # jq . 00:34:00.398 23:47:21 -- nvmf/common.sh@545 -- # IFS=, 00:34:00.398 23:47:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:00.398 "params": { 00:34:00.398 "name": "Nvme0", 00:34:00.398 "trtype": "tcp", 00:34:00.398 "traddr": "10.0.0.2", 00:34:00.398 "adrfam": "ipv4", 00:34:00.398 "trsvcid": "4420", 00:34:00.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:00.398 "hdgst": false, 00:34:00.398 "ddgst": false 00:34:00.398 }, 00:34:00.398 "method": "bdev_nvme_attach_controller" 00:34:00.398 },{ 00:34:00.398 "params": { 00:34:00.398 "name": "Nvme1", 00:34:00.398 "trtype": "tcp", 00:34:00.398 "traddr": "10.0.0.2", 00:34:00.398 "adrfam": "ipv4", 00:34:00.398 "trsvcid": "4420", 00:34:00.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:00.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:00.398 "hdgst": false, 00:34:00.398 "ddgst": false 00:34:00.398 }, 00:34:00.398 "method": "bdev_nvme_attach_controller" 00:34:00.398 }' 00:34:00.398 23:47:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:00.398 23:47:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:00.398 23:47:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.398 23:47:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.398 23:47:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:00.398 23:47:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:00.398 23:47:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:00.398 23:47:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:00.398 23:47:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:00.398 23:47:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.658 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.658 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.658 fio-3.35 00:34:00.658 Starting 2 threads 00:34:00.658 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.226 [2024-07-11 23:47:22.063266] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
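The two-subsystem setup traced in this test is the single-bdev sequence looped twice. A sketch using the rpc.py equivalents of the rpc_cmd calls above (rpc_cmd additionally routes each call to the target's RPC socket and namespace, which is elided here):

    for i in 0 1; do
        scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done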
00:34:01.226 [2024-07-11 23:47:22.063349] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:13.413 00:34:13.413 filename0: (groupid=0, jobs=1): err= 0: pid=402993: Thu Jul 11 23:47:32 2024 00:34:13.413 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10009msec) 00:34:13.413 slat (usec): min=4, max=102, avg=10.53, stdev= 4.09 00:34:13.413 clat (usec): min=40919, max=47671, avg=41844.36, stdev=563.00 00:34:13.413 lat (usec): min=40928, max=47700, avg=41854.89, stdev=563.09 00:34:13.413 clat percentiles (usec): 00:34:13.413 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:13.413 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:13.413 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:13.413 | 99.00th=[42730], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:34:13.413 | 99.99th=[47449] 00:34:13.413 bw ( KiB/s): min= 352, max= 384, per=49.69%, avg=380.80, stdev= 9.85, samples=20 00:34:13.413 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:34:13.413 lat (msec) : 50=100.00% 00:34:13.413 cpu : usr=94.66%, sys=4.99%, ctx=14, majf=0, minf=200 00:34:13.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.413 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:13.413 filename1: (groupid=0, jobs=1): err= 0: pid=402994: Thu Jul 11 23:47:32 2024 00:34:13.413 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10021msec) 00:34:13.413 slat (nsec): min=5599, max=36242, avg=10797.70, stdev=3017.24 00:34:13.413 clat (usec): min=40909, max=45811, avg=41719.68, stdev=517.61 00:34:13.413 lat (usec): min=40918, max=45841, avg=41730.48, stdev=517.74 00:34:13.413 clat percentiles (usec): 00:34:13.413 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:13.413 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:13.413 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:13.413 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:34:13.413 | 99.99th=[45876] 00:34:13.413 bw ( KiB/s): min= 352, max= 384, per=49.95%, avg=382.40, stdev= 7.16, samples=20 00:34:13.413 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:34:13.413 lat (msec) : 50=100.00% 00:34:13.413 cpu : usr=94.95%, sys=4.72%, ctx=16, majf=0, minf=205 00:34:13.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.413 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:13.413 00:34:13.413 Run status group 0 (all jobs): 00:34:13.413 READ: bw=765KiB/s (783kB/s), 382KiB/s-383KiB/s (391kB/s-392kB/s), io=7664KiB (7848kB), run=10009-10021msec 00:34:13.413 23:47:32 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:13.413 23:47:32 -- target/dif.sh@43 -- # local sub 00:34:13.413 23:47:32 -- target/dif.sh@45 -- # for sub in "$@" 00:34:13.413 23:47:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:13.413 23:47:32 -- target/dif.sh@36 -- # local sub_id=0 
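A quick consistency check on the numbers so far: the single-subsystem default run moved 7456 KiB in 10.004 s, which is the reported 745 KiB/s, and at 4 KiB per read that is the reported 186 IOPS (1864 reads issued in total). The two-subsystem run above splits the load almost evenly, 382 + 383 = 765 KiB/s aggregate with the per= column at 49.69% and 49.95%, so adding a second subsystem costs nothing visible at this iodepth=4 queue depth.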
00:34:13.413 23:47:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 23:47:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 23:47:32 -- target/dif.sh@45 -- # for sub in "$@" 00:34:13.413 23:47:32 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:13.413 23:47:32 -- target/dif.sh@36 -- # local sub_id=1 00:34:13.413 23:47:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 23:47:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 00:34:13.413 real 0m11.360s 00:34:13.413 user 0m20.321s 00:34:13.413 sys 0m1.266s 00:34:13.413 23:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 ************************************ 00:34:13.413 END TEST fio_dif_1_multi_subsystems 00:34:13.413 ************************************ 00:34:13.413 23:47:32 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:13.413 23:47:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:13.413 23:47:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 ************************************ 00:34:13.413 START TEST fio_dif_rand_params 00:34:13.413 ************************************ 00:34:13.413 23:47:32 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:13.413 23:47:32 -- target/dif.sh@100 -- # local NULL_DIF 00:34:13.413 23:47:32 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:13.413 23:47:32 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:13.413 23:47:32 -- target/dif.sh@103 -- # bs=128k 00:34:13.413 23:47:32 -- target/dif.sh@103 -- # numjobs=3 00:34:13.413 23:47:32 -- target/dif.sh@103 -- # iodepth=3 00:34:13.413 23:47:32 -- target/dif.sh@103 -- # runtime=5 00:34:13.413 23:47:32 -- target/dif.sh@105 -- # create_subsystems 0 00:34:13.413 23:47:32 -- target/dif.sh@28 -- # local sub 00:34:13.413 23:47:32 -- target/dif.sh@30 -- # for sub in "$@" 00:34:13.413 23:47:32 -- target/dif.sh@31 -- # create_subsystem 0 00:34:13.413 23:47:32 -- target/dif.sh@18 -- # local sub_id=0 00:34:13.413 23:47:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 bdev_null0 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 23:47:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 23:47:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 23:47:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:13.413 23:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:13.413 23:47:32 -- common/autotest_common.sh@10 -- # set +x 00:34:13.413 [2024-07-11 23:47:32.498864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.413 23:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.413 23:47:32 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:13.413 23:47:32 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:13.413 23:47:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:13.413 23:47:32 -- nvmf/common.sh@520 -- # config=() 00:34:13.413 23:47:32 -- nvmf/common.sh@520 -- # local subsystem config 00:34:13.413 23:47:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:13.413 23:47:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.413 23:47:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:13.413 { 00:34:13.413 "params": { 00:34:13.413 "name": "Nvme$subsystem", 00:34:13.413 "trtype": "$TEST_TRANSPORT", 00:34:13.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:13.413 "adrfam": "ipv4", 00:34:13.413 "trsvcid": "$NVMF_PORT", 00:34:13.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.414 "hdgst": ${hdgst:-false}, 00:34:13.414 "ddgst": ${ddgst:-false} 00:34:13.414 }, 00:34:13.414 "method": "bdev_nvme_attach_controller" 00:34:13.414 } 00:34:13.414 EOF 00:34:13.414 )") 00:34:13.414 23:47:32 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.414 23:47:32 -- target/dif.sh@82 -- # gen_fio_conf 00:34:13.414 23:47:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:13.414 23:47:32 -- target/dif.sh@54 -- # local file 00:34:13.414 23:47:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:13.414 23:47:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:13.414 23:47:32 -- target/dif.sh@56 -- # cat 00:34:13.414 23:47:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.414 23:47:32 -- common/autotest_common.sh@1320 -- # shift 00:34:13.414 23:47:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:13.414 23:47:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.414 23:47:32 -- nvmf/common.sh@542 -- # cat 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.414 23:47:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:13.414 23:47:32 
-- target/dif.sh@72 -- # (( file <= files )) 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:13.414 23:47:32 -- nvmf/common.sh@544 -- # jq . 00:34:13.414 23:47:32 -- nvmf/common.sh@545 -- # IFS=, 00:34:13.414 23:47:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:13.414 "params": { 00:34:13.414 "name": "Nvme0", 00:34:13.414 "trtype": "tcp", 00:34:13.414 "traddr": "10.0.0.2", 00:34:13.414 "adrfam": "ipv4", 00:34:13.414 "trsvcid": "4420", 00:34:13.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.414 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:13.414 "hdgst": false, 00:34:13.414 "ddgst": false 00:34:13.414 }, 00:34:13.414 "method": "bdev_nvme_attach_controller" 00:34:13.414 }' 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:13.414 23:47:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:13.414 23:47:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:13.414 23:47:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:13.414 23:47:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:13.414 23:47:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:13.414 23:47:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.414 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:13.414 ... 00:34:13.414 fio-3.35 00:34:13.414 Starting 3 threads 00:34:13.414 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.414 [2024-07-11 23:47:33.294487] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
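For this pass fio_dif_rand_params selected NULL_DIF=3 with bs=128k, numjobs=3, iodepth=3 and runtime=5, as set at the top of the test. The generated job file itself is not echoed in the trace; what follows is only a plausible reconstruction consistent with the rw=randread, bs=(R) 128KiB, iodepth=3 banner just above, where filename, thread and time_based are assumptions:

    [global]
    ioengine=spdk_bdev
    thread=1            ; assumed: the SPDK fio plugin runs jobs as threads
    [filename0]
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1        ; assumed from the ~5.0 s run= figures below
    filename=Nvme0n1    ; assumed: the namespace bdev created from controller Nvme0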
00:34:13.414 [2024-07-11 23:47:33.294591] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:17.597 00:34:17.597 filename0: (groupid=0, jobs=1): err= 0: pid=404431: Thu Jul 11 23:47:38 2024 00:34:17.597 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(120MiB/5048msec) 00:34:17.597 slat (nsec): min=5182, max=41329, avg=14988.32, stdev=4713.60 00:34:17.597 clat (usec): min=5571, max=57942, avg=15685.15, stdev=14687.98 00:34:17.597 lat (usec): min=5583, max=57952, avg=15700.14, stdev=14687.90 00:34:17.597 clat percentiles (usec): 00:34:17.597 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 8029], 00:34:17.597 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[11076], 00:34:17.597 | 70.00th=[12125], 80.00th=[13304], 90.00th=[50070], 95.00th=[52167], 00:34:17.597 | 99.00th=[54264], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:34:17.597 | 99.99th=[57934] 00:34:17.597 bw ( KiB/s): min=14592, max=32000, per=34.21%, avg=24550.40, stdev=5217.84, samples=10 00:34:17.597 iops : min= 114, max= 250, avg=191.80, stdev=40.76, samples=10 00:34:17.597 lat (msec) : 10=50.68%, 20=35.07%, 50=4.89%, 100=9.37% 00:34:17.597 cpu : usr=94.17%, sys=5.13%, ctx=57, majf=0, minf=149 00:34:17.597 IO depths : 1=5.0%, 2=95.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.597 issued rwts: total=961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.597 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.597 filename0: (groupid=0, jobs=1): err= 0: pid=404432: Thu Jul 11 23:47:38 2024 00:34:17.597 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(103MiB/5006msec) 00:34:17.597 slat (nsec): min=4747, max=40664, avg=16453.94, stdev=4104.25 00:34:17.597 clat (usec): min=6400, max=94393, avg=18178.34, stdev=15481.56 00:34:17.597 lat (usec): min=6412, max=94415, avg=18194.80, stdev=15481.48 00:34:17.597 clat percentiles (usec): 00:34:17.597 | 1.00th=[ 7046], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[10028], 00:34:17.597 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12125], 60.00th=[13304], 00:34:17.597 | 70.00th=[14615], 80.00th=[16581], 90.00th=[51643], 95.00th=[53740], 00:34:17.597 | 99.00th=[58983], 99.50th=[61080], 99.90th=[94897], 99.95th=[94897], 00:34:17.597 | 99.99th=[94897] 00:34:17.597 bw ( KiB/s): min=14080, max=30976, per=29.33%, avg=21048.90, stdev=5267.85, samples=10 00:34:17.597 iops : min= 110, max= 242, avg=164.40, stdev=41.08, samples=10 00:34:17.597 lat (msec) : 10=19.88%, 20=65.09%, 50=1.45%, 100=13.58% 00:34:17.597 cpu : usr=94.39%, sys=4.92%, ctx=35, majf=0, minf=136 00:34:17.597 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.597 issued rwts: total=825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.597 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.597 filename0: (groupid=0, jobs=1): err= 0: pid=404433: Thu Jul 11 23:47:38 2024 00:34:17.597 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(131MiB/5045msec) 00:34:17.597 slat (usec): min=5, max=118, avg=15.38, stdev= 5.61 00:34:17.597 clat (usec): min=5457, max=91416, avg=14389.64, stdev=13905.09 00:34:17.597 lat (usec): min=5470, max=91429, avg=14405.02, stdev=13905.07 00:34:17.597 clat percentiles (usec): 
00:34:17.597 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7177], 00:34:17.597 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10552], 00:34:17.597 | 70.00th=[11731], 80.00th=[13304], 90.00th=[49546], 95.00th=[53216], 00:34:17.597 | 99.00th=[56886], 99.50th=[57410], 99.90th=[59507], 99.95th=[91751], 00:34:17.597 | 99.99th=[91751] 00:34:17.597 bw ( KiB/s): min=16640, max=33024, per=37.21%, avg=26700.80, stdev=5633.36, samples=10 00:34:17.597 iops : min= 130, max= 258, avg=208.60, stdev=44.01, samples=10 00:34:17.597 lat (msec) : 10=54.98%, 20=34.10%, 50=1.72%, 100=9.20% 00:34:17.597 cpu : usr=94.94%, sys=4.52%, ctx=12, majf=0, minf=221 00:34:17.597 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.597 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.597 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.597 00:34:17.597 Run status group 0 (all jobs): 00:34:17.597 READ: bw=70.1MiB/s (73.5MB/s), 20.6MiB/s-25.9MiB/s (21.6MB/s-27.1MB/s), io=354MiB (371MB), run=5006-5048msec 00:34:17.857 23:47:38 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:17.857 23:47:38 -- target/dif.sh@43 -- # local sub 00:34:17.857 23:47:38 -- target/dif.sh@45 -- # for sub in "$@" 00:34:17.857 23:47:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:17.857 23:47:38 -- target/dif.sh@36 -- # local sub_id=0 00:34:17.857 23:47:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:17.857 23:47:38 -- target/dif.sh@109 -- # bs=4k 00:34:17.857 23:47:38 -- target/dif.sh@109 -- # numjobs=8 00:34:17.857 23:47:38 -- target/dif.sh@109 -- # iodepth=16 00:34:17.857 23:47:38 -- target/dif.sh@109 -- # runtime= 00:34:17.857 23:47:38 -- target/dif.sh@109 -- # files=2 00:34:17.857 23:47:38 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:17.857 23:47:38 -- target/dif.sh@28 -- # local sub 00:34:17.857 23:47:38 -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.857 23:47:38 -- target/dif.sh@31 -- # create_subsystem 0 00:34:17.857 23:47:38 -- target/dif.sh@18 -- # local sub_id=0 00:34:17.857 23:47:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 bdev_null0 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 [2024-07-11 23:47:38.727076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.857 23:47:38 -- target/dif.sh@31 -- # create_subsystem 1 00:34:17.857 23:47:38 -- target/dif.sh@18 -- # local sub_id=1 00:34:17.857 23:47:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 bdev_null1 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.857 23:47:38 -- target/dif.sh@31 -- # create_subsystem 2 00:34:17.857 23:47:38 -- target/dif.sh@18 -- # local sub_id=2 00:34:17.857 23:47:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 bdev_null2 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:17.857 23:47:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.857 23:47:38 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 23:47:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.857 23:47:38 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:17.857 23:47:38 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:17.857 23:47:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:17.857 23:47:38 -- nvmf/common.sh@520 -- # config=() 00:34:17.857 23:47:38 -- nvmf/common.sh@520 -- # local subsystem config 00:34:17.857 23:47:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:17.857 23:47:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.857 23:47:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:17.857 { 00:34:17.857 "params": { 00:34:17.857 "name": "Nvme$subsystem", 00:34:17.857 "trtype": "$TEST_TRANSPORT", 00:34:17.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.857 "adrfam": "ipv4", 00:34:17.857 "trsvcid": "$NVMF_PORT", 00:34:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.857 "hdgst": ${hdgst:-false}, 00:34:17.857 "ddgst": ${ddgst:-false} 00:34:17.857 }, 00:34:17.857 "method": "bdev_nvme_attach_controller" 00:34:17.857 } 00:34:17.857 EOF 00:34:17.857 )") 00:34:17.857 23:47:38 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.857 23:47:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:17.857 23:47:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:17.857 23:47:38 -- target/dif.sh@82 -- # gen_fio_conf 00:34:17.857 23:47:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:17.857 23:47:38 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.857 23:47:38 -- target/dif.sh@54 -- # local file 00:34:17.857 23:47:38 -- common/autotest_common.sh@1320 -- # shift 00:34:17.857 23:47:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:17.857 23:47:38 -- target/dif.sh@56 -- # cat 00:34:17.857 23:47:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.857 23:47:38 -- nvmf/common.sh@542 -- # cat 00:34:17.857 23:47:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.857 23:47:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:17.857 23:47:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:17.857 23:47:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:17.857 23:47:38 -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.857 23:47:38 -- target/dif.sh@73 -- # cat 00:34:17.857 23:47:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:17.857 23:47:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:17.857 { 00:34:17.857 "params": { 00:34:17.857 "name": "Nvme$subsystem", 00:34:17.857 "trtype": "$TEST_TRANSPORT", 00:34:17.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.857 "adrfam": "ipv4", 00:34:17.857 "trsvcid": 
"$NVMF_PORT", 00:34:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.857 "hdgst": ${hdgst:-false}, 00:34:17.857 "ddgst": ${ddgst:-false} 00:34:17.857 }, 00:34:17.857 "method": "bdev_nvme_attach_controller" 00:34:17.857 } 00:34:17.857 EOF 00:34:17.857 )") 00:34:17.858 23:47:38 -- nvmf/common.sh@542 -- # cat 00:34:17.858 23:47:38 -- target/dif.sh@72 -- # (( file++ )) 00:34:17.858 23:47:38 -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.858 23:47:38 -- target/dif.sh@73 -- # cat 00:34:17.858 23:47:38 -- target/dif.sh@72 -- # (( file++ )) 00:34:17.858 23:47:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:17.858 23:47:38 -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.858 23:47:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:17.858 { 00:34:17.858 "params": { 00:34:17.858 "name": "Nvme$subsystem", 00:34:17.858 "trtype": "$TEST_TRANSPORT", 00:34:17.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.858 "adrfam": "ipv4", 00:34:17.858 "trsvcid": "$NVMF_PORT", 00:34:17.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.858 "hdgst": ${hdgst:-false}, 00:34:17.858 "ddgst": ${ddgst:-false} 00:34:17.858 }, 00:34:17.858 "method": "bdev_nvme_attach_controller" 00:34:17.858 } 00:34:17.858 EOF 00:34:17.858 )") 00:34:18.125 23:47:38 -- nvmf/common.sh@542 -- # cat 00:34:18.125 23:47:38 -- nvmf/common.sh@544 -- # jq . 00:34:18.125 23:47:38 -- nvmf/common.sh@545 -- # IFS=, 00:34:18.125 23:47:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:18.125 "params": { 00:34:18.125 "name": "Nvme0", 00:34:18.125 "trtype": "tcp", 00:34:18.125 "traddr": "10.0.0.2", 00:34:18.125 "adrfam": "ipv4", 00:34:18.125 "trsvcid": "4420", 00:34:18.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:18.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:18.125 "hdgst": false, 00:34:18.125 "ddgst": false 00:34:18.125 }, 00:34:18.125 "method": "bdev_nvme_attach_controller" 00:34:18.125 },{ 00:34:18.125 "params": { 00:34:18.125 "name": "Nvme1", 00:34:18.125 "trtype": "tcp", 00:34:18.125 "traddr": "10.0.0.2", 00:34:18.125 "adrfam": "ipv4", 00:34:18.125 "trsvcid": "4420", 00:34:18.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.125 "hdgst": false, 00:34:18.125 "ddgst": false 00:34:18.125 }, 00:34:18.125 "method": "bdev_nvme_attach_controller" 00:34:18.125 },{ 00:34:18.125 "params": { 00:34:18.125 "name": "Nvme2", 00:34:18.125 "trtype": "tcp", 00:34:18.125 "traddr": "10.0.0.2", 00:34:18.125 "adrfam": "ipv4", 00:34:18.125 "trsvcid": "4420", 00:34:18.125 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:18.126 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:18.126 "hdgst": false, 00:34:18.126 "ddgst": false 00:34:18.126 }, 00:34:18.126 "method": "bdev_nvme_attach_controller" 00:34:18.126 }' 00:34:18.126 23:47:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:18.126 23:47:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:18.126 23:47:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.126 23:47:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.126 23:47:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:18.126 23:47:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:18.126 23:47:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:18.126 23:47:38 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:18.126 23:47:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:18.126 23:47:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.384 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.384 ... 00:34:18.384 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.384 ... 00:34:18.384 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.384 ... 00:34:18.384 fio-3.35 00:34:18.384 Starting 24 threads 00:34:18.384 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.949 [2024-07-11 23:47:39.800742] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:18.949 [2024-07-11 23:47:39.800828] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:31.188 00:34:31.188 filename0: (groupid=0, jobs=1): err= 0: pid=405315: Thu Jul 11 23:47:50 2024 00:34:31.188 read: IOPS=48, BW=195KiB/s (200kB/s)(1984KiB/10181msec) 00:34:31.188 slat (nsec): min=8139, max=84689, avg=24694.21, stdev=16831.11 00:34:31.188 clat (msec): min=45, max=549, avg=328.20, stdev=94.85 00:34:31.188 lat (msec): min=45, max=549, avg=328.23, stdev=94.85 00:34:31.188 clat percentiles (msec): 00:34:31.188 | 1.00th=[ 46], 5.00th=[ 199], 10.00th=[ 207], 20.00th=[ 243], 00:34:31.188 | 30.00th=[ 284], 40.00th=[ 326], 50.00th=[ 363], 60.00th=[ 380], 00:34:31.188 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 447], 00:34:31.188 | 99.00th=[ 472], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:34:31.188 | 99.99th=[ 550] 00:34:31.188 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=192.00, stdev=84.82, samples=20 00:34:31.188 iops : min= 32, max= 96, avg=48.00, stdev=21.21, samples=20 00:34:31.188 lat (msec) : 50=3.23%, 250=25.81%, 500=70.16%, 750=0.81% 00:34:31.188 cpu : usr=98.58%, sys=1.04%, ctx=16, majf=0, minf=46 00:34:31.188 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:31.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.188 filename0: (groupid=0, jobs=1): err= 0: pid=405316: Thu Jul 11 23:47:50 2024 00:34:31.188 read: IOPS=48, BW=195KiB/s (199kB/s)(1984KiB/10185msec) 00:34:31.188 slat (usec): min=8, max=104, avg=28.52, stdev=10.33 00:34:31.188 clat (msec): min=152, max=549, avg=328.19, stdev=89.53 00:34:31.188 lat (msec): min=152, max=549, avg=328.22, stdev=89.53 00:34:31.188 clat percentiles (msec): 00:34:31.188 | 1.00th=[ 153], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 230], 00:34:31.188 | 30.00th=[ 245], 40.00th=[ 342], 50.00th=[ 363], 60.00th=[ 380], 00:34:31.188 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 447], 00:34:31.188 | 99.00th=[ 535], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:34:31.188 | 99.99th=[ 550] 00:34:31.188 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=192.00, stdev=75.23, samples=20 00:34:31.188 iops : min= 32, max= 96, avg=48.00, stdev=18.81, samples=20 
00:34:31.188 lat (msec) : 250=32.26%, 500=66.53%, 750=1.21% 00:34:31.188 cpu : usr=98.35%, sys=1.14%, ctx=23, majf=0, minf=30 00:34:31.188 IO depths : 1=2.8%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:34:31.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.188 filename0: (groupid=0, jobs=1): err= 0: pid=405317: Thu Jul 11 23:47:50 2024 00:34:31.188 read: IOPS=63, BW=253KiB/s (259kB/s)(2576KiB/10200msec) 00:34:31.188 slat (usec): min=4, max=205, avg=21.75, stdev=22.97 00:34:31.188 clat (msec): min=16, max=422, avg=252.70, stdev=81.95 00:34:31.188 lat (msec): min=16, max=422, avg=252.72, stdev=81.94 00:34:31.188 clat percentiles (msec): 00:34:31.188 | 1.00th=[ 17], 5.00th=[ 116], 10.00th=[ 142], 20.00th=[ 211], 00:34:31.188 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 255], 60.00th=[ 275], 00:34:31.188 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 342], 95.00th=[ 384], 00:34:31.188 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422], 00:34:31.188 | 99.99th=[ 422] 00:34:31.188 bw ( KiB/s): min= 128, max= 513, per=5.27%, avg=251.25, stdev=85.48, samples=20 00:34:31.188 iops : min= 32, max= 128, avg=62.80, stdev=21.33, samples=20 00:34:31.188 lat (msec) : 20=2.48%, 50=2.48%, 250=42.55%, 500=52.48% 00:34:31.188 cpu : usr=98.59%, sys=1.02%, ctx=15, majf=0, minf=22 00:34:31.188 IO depths : 1=1.7%, 2=4.5%, 4=14.3%, 8=68.5%, 16=11.0%, 32=0.0%, >=64=0.0% 00:34:31.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.188 filename0: (groupid=0, jobs=1): err= 0: pid=405318: Thu Jul 11 23:47:50 2024 00:34:31.188 read: IOPS=47, BW=189KiB/s (193kB/s)(1920KiB/10170msec) 00:34:31.188 slat (usec): min=8, max=146, avg=69.57, stdev=36.79 00:34:31.188 clat (msec): min=196, max=567, avg=338.41, stdev=79.44 00:34:31.188 lat (msec): min=196, max=567, avg=338.48, stdev=79.45 00:34:31.188 clat percentiles (msec): 00:34:31.188 | 1.00th=[ 207], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 234], 00:34:31.188 | 30.00th=[ 279], 40.00th=[ 342], 50.00th=[ 368], 60.00th=[ 384], 00:34:31.188 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 405], 95.00th=[ 447], 00:34:31.188 | 99.00th=[ 550], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:34:31.188 | 99.99th=[ 567] 00:34:31.188 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=185.60, stdev=74.94, samples=20 00:34:31.188 iops : min= 32, max= 96, avg=46.40, stdev=18.73, samples=20 00:34:31.188 lat (msec) : 250=21.67%, 500=76.25%, 750=2.08% 00:34:31.188 cpu : usr=98.63%, sys=0.96%, ctx=17, majf=0, minf=36 00:34:31.188 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:34:31.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.188 filename0: (groupid=0, jobs=1): err= 0: pid=405319: Thu Jul 11 23:47:50 2024 00:34:31.188 read: IOPS=45, BW=183KiB/s 
(187kB/s)(1856KiB/10168msec) 00:34:31.188 slat (usec): min=6, max=132, avg=40.75, stdev=32.20 00:34:31.188 clat (msec): min=207, max=465, avg=350.10, stdev=78.91 00:34:31.188 lat (msec): min=207, max=465, avg=350.14, stdev=78.89 00:34:31.188 clat percentiles (msec): 00:34:31.188 | 1.00th=[ 207], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 243], 00:34:31.188 | 30.00th=[ 321], 40.00th=[ 363], 50.00th=[ 388], 60.00th=[ 393], 00:34:31.188 | 70.00th=[ 401], 80.00th=[ 422], 90.00th=[ 435], 95.00th=[ 443], 00:34:31.188 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 464], 99.95th=[ 464], 00:34:31.188 | 99.99th=[ 464] 00:34:31.188 bw ( KiB/s): min= 128, max= 384, per=3.76%, avg=179.20, stdev=76.58, samples=20 00:34:31.188 iops : min= 32, max= 96, avg=44.80, stdev=19.14, samples=20 00:34:31.188 lat (msec) : 250=21.12%, 500=78.88% 00:34:31.188 cpu : usr=98.85%, sys=0.74%, ctx=9, majf=0, minf=32 00:34:31.188 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:31.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.188 filename0: (groupid=0, jobs=1): err= 0: pid=405320: Thu Jul 11 23:47:50 2024 00:34:31.188 read: IOPS=45, BW=183KiB/s (187kB/s)(1856KiB/10147msec) 00:34:31.188 slat (nsec): min=8712, max=99731, avg=27874.71, stdev=15640.86 00:34:31.188 clat (msec): min=140, max=719, avg=349.64, stdev=106.70 00:34:31.188 lat (msec): min=140, max=719, avg=349.66, stdev=106.70 00:34:31.188 clat percentiles (msec): 00:34:31.188 | 1.00th=[ 209], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 236], 00:34:31.188 | 30.00th=[ 284], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 388], 00:34:31.188 | 70.00th=[ 393], 80.00th=[ 401], 90.00th=[ 422], 95.00th=[ 435], 00:34:31.188 | 99.00th=[ 718], 99.50th=[ 718], 99.90th=[ 718], 99.95th=[ 718], 00:34:31.188 | 99.99th=[ 718] 00:34:31.188 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=188.63, stdev=77.03, samples=19 00:34:31.188 iops : min= 32, max= 96, avg=47.16, stdev=19.26, samples=19 00:34:31.188 lat (msec) : 250=25.00%, 500=70.69%, 750=4.31% 00:34:31.188 cpu : usr=98.39%, sys=1.01%, ctx=40, majf=0, minf=32 00:34:31.188 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:34:31.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.188 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.188 filename0: (groupid=0, jobs=1): err= 0: pid=405321: Thu Jul 11 23:47:50 2024 00:34:31.188 read: IOPS=47, BW=189KiB/s (193kB/s)(1920KiB/10165msec) 00:34:31.188 slat (usec): min=8, max=103, avg=28.71, stdev=18.33 00:34:31.188 clat (msec): min=183, max=562, avg=338.48, stdev=79.96 00:34:31.188 lat (msec): min=183, max=562, avg=338.51, stdev=79.95 00:34:31.188 clat percentiles (msec): 00:34:31.188 | 1.00th=[ 207], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 243], 00:34:31.188 | 30.00th=[ 279], 40.00th=[ 326], 50.00th=[ 372], 60.00th=[ 384], 00:34:31.188 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 405], 95.00th=[ 451], 00:34:31.188 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 567], 00:34:31.188 | 99.99th=[ 567] 00:34:31.188 bw ( KiB/s): min= 112, max= 384, per=3.88%, avg=185.60, 
stdev=76.36, samples=20 00:34:31.188 iops : min= 28, max= 96, avg=46.40, stdev=19.09, samples=20 00:34:31.188 lat (msec) : 250=21.67%, 500=76.67%, 750=1.67% 00:34:31.188 cpu : usr=98.77%, sys=0.84%, ctx=145, majf=0, minf=37 00:34:31.188 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:34:31.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename0: (groupid=0, jobs=1): err= 0: pid=405322: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=48, BW=194KiB/s (199kB/s)(1976KiB/10181msec) 00:34:31.189 slat (usec): min=8, max=214, avg=35.29, stdev=18.95 00:34:31.189 clat (msec): min=152, max=564, avg=328.72, stdev=87.04 00:34:31.189 lat (msec): min=152, max=564, avg=328.76, stdev=87.03 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 153], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 230], 00:34:31.189 | 30.00th=[ 259], 40.00th=[ 342], 50.00th=[ 368], 60.00th=[ 380], 00:34:31.189 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 418], 95.00th=[ 426], 00:34:31.189 | 99.00th=[ 451], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:34:31.189 | 99.99th=[ 567] 00:34:31.189 bw ( KiB/s): min= 112, max= 384, per=4.01%, avg=192.00, stdev=74.14, samples=20 00:34:31.189 iops : min= 28, max= 96, avg=48.00, stdev=18.54, samples=20 00:34:31.189 lat (msec) : 250=29.15%, 500=70.04%, 750=0.81% 00:34:31.189 cpu : usr=97.94%, sys=1.39%, ctx=61, majf=0, minf=23 00:34:31.189 IO depths : 1=3.0%, 2=9.3%, 4=25.1%, 8=53.2%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:31.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename1: (groupid=0, jobs=1): err= 0: pid=405323: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=64, BW=260KiB/s (266kB/s)(2648KiB/10200msec) 00:34:31.189 slat (usec): min=4, max=117, avg=21.51, stdev=20.87 00:34:31.189 clat (msec): min=17, max=610, avg=246.15, stdev=78.77 00:34:31.189 lat (msec): min=17, max=610, avg=246.17, stdev=78.76 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 18], 5.00th=[ 116], 10.00th=[ 184], 20.00th=[ 220], 00:34:31.189 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 259], 00:34:31.189 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 313], 95.00th=[ 321], 00:34:31.189 | 99.00th=[ 584], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:34:31.189 | 99.99th=[ 609] 00:34:31.189 bw ( KiB/s): min= 128, max= 512, per=5.41%, avg=258.40, stdev=85.34, samples=20 00:34:31.189 iops : min= 32, max= 128, avg=64.60, stdev=21.34, samples=20 00:34:31.189 lat (msec) : 20=3.47%, 50=1.36%, 250=51.06%, 500=42.60%, 750=1.51% 00:34:31.189 cpu : usr=98.41%, sys=1.07%, ctx=19, majf=0, minf=23 00:34:31.189 IO depths : 1=1.1%, 2=2.9%, 4=11.5%, 8=73.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:34:31.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=90.3%, 8=4.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename1: (groupid=0, 
jobs=1): err= 0: pid=405324: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=48, BW=194KiB/s (199kB/s)(1976KiB/10181msec) 00:34:31.189 slat (usec): min=8, max=144, avg=69.68, stdev=38.88 00:34:31.189 clat (msec): min=141, max=583, avg=328.53, stdev=96.68 00:34:31.189 lat (msec): min=141, max=583, avg=328.60, stdev=96.70 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 142], 5.00th=[ 153], 10.00th=[ 201], 20.00th=[ 230], 00:34:31.189 | 30.00th=[ 259], 40.00th=[ 326], 50.00th=[ 363], 60.00th=[ 384], 00:34:31.189 | 70.00th=[ 397], 80.00th=[ 414], 90.00th=[ 426], 95.00th=[ 447], 00:34:31.189 | 99.00th=[ 518], 99.50th=[ 523], 99.90th=[ 584], 99.95th=[ 584], 00:34:31.189 | 99.99th=[ 584] 00:34:31.189 bw ( KiB/s): min= 112, max= 384, per=4.01%, avg=191.20, stdev=74.77, samples=20 00:34:31.189 iops : min= 28, max= 96, avg=47.80, stdev=18.69, samples=20 00:34:31.189 lat (msec) : 250=29.15%, 500=68.83%, 750=2.02% 00:34:31.189 cpu : usr=98.46%, sys=1.05%, ctx=36, majf=0, minf=38 00:34:31.189 IO depths : 1=3.0%, 2=9.3%, 4=25.1%, 8=53.2%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:31.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename1: (groupid=0, jobs=1): err= 0: pid=405325: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=48, BW=194KiB/s (199kB/s)(1976KiB/10181msec) 00:34:31.189 slat (usec): min=8, max=164, avg=64.33, stdev=39.40 00:34:31.189 clat (msec): min=153, max=622, avg=328.72, stdev=89.33 00:34:31.189 lat (msec): min=153, max=622, avg=328.78, stdev=89.34 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 155], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 230], 00:34:31.189 | 30.00th=[ 259], 40.00th=[ 313], 50.00th=[ 368], 60.00th=[ 380], 00:34:31.189 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 405], 95.00th=[ 447], 00:34:31.189 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 625], 99.95th=[ 625], 00:34:31.189 | 99.99th=[ 625] 00:34:31.189 bw ( KiB/s): min= 112, max= 384, per=4.01%, avg=191.20, stdev=77.43, samples=20 00:34:31.189 iops : min= 28, max= 96, avg=47.80, stdev=19.36, samples=20 00:34:31.189 lat (msec) : 250=27.53%, 500=70.85%, 750=1.62% 00:34:31.189 cpu : usr=98.88%, sys=0.71%, ctx=13, majf=0, minf=27 00:34:31.189 IO depths : 1=3.6%, 2=9.7%, 4=24.5%, 8=53.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:34:31.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename1: (groupid=0, jobs=1): err= 0: pid=405326: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=47, BW=189KiB/s (193kB/s)(1920KiB/10182msec) 00:34:31.189 slat (nsec): min=8729, max=91188, avg=40376.78, stdev=19875.38 00:34:31.189 clat (msec): min=206, max=549, avg=338.93, stdev=79.34 00:34:31.189 lat (msec): min=206, max=549, avg=338.97, stdev=79.33 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 207], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 234], 00:34:31.189 | 30.00th=[ 275], 40.00th=[ 342], 50.00th=[ 368], 60.00th=[ 380], 00:34:31.189 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 451], 00:34:31.189 | 99.00th=[ 472], 99.50th=[ 535], 99.90th=[ 550], 99.95th=[ 
550], 00:34:31.189 | 99.99th=[ 550] 00:34:31.189 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=185.60, stdev=74.94, samples=20 00:34:31.189 iops : min= 32, max= 96, avg=46.40, stdev=18.73, samples=20 00:34:31.189 lat (msec) : 250=23.75%, 500=75.42%, 750=0.83% 00:34:31.189 cpu : usr=98.92%, sys=0.69%, ctx=13, majf=0, minf=33 00:34:31.189 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:34:31.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename1: (groupid=0, jobs=1): err= 0: pid=405327: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=45, BW=183KiB/s (187kB/s)(1856KiB/10137msec) 00:34:31.189 slat (usec): min=8, max=143, avg=35.92, stdev=24.75 00:34:31.189 clat (msec): min=140, max=720, avg=349.26, stdev=109.92 00:34:31.189 lat (msec): min=140, max=720, avg=349.29, stdev=109.91 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 184], 5.00th=[ 207], 10.00th=[ 211], 20.00th=[ 230], 00:34:31.189 | 30.00th=[ 284], 40.00th=[ 326], 50.00th=[ 372], 60.00th=[ 393], 00:34:31.189 | 70.00th=[ 397], 80.00th=[ 405], 90.00th=[ 426], 95.00th=[ 498], 00:34:31.189 | 99.00th=[ 718], 99.50th=[ 718], 99.90th=[ 718], 99.95th=[ 718], 00:34:31.189 | 99.99th=[ 718] 00:34:31.189 bw ( KiB/s): min= 16, max= 384, per=3.76%, avg=179.20, stdev=83.64, samples=20 00:34:31.189 iops : min= 4, max= 96, avg=44.80, stdev=20.91, samples=20 00:34:31.189 lat (msec) : 250=25.00%, 500=70.26%, 750=4.74% 00:34:31.189 cpu : usr=98.70%, sys=0.91%, ctx=22, majf=0, minf=42 00:34:31.189 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:31.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename1: (groupid=0, jobs=1): err= 0: pid=405328: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=48, BW=195KiB/s (199kB/s)(1984KiB/10185msec) 00:34:31.189 slat (nsec): min=12103, max=88901, avg=23807.60, stdev=10583.59 00:34:31.189 clat (msec): min=151, max=448, avg=328.22, stdev=88.06 00:34:31.189 lat (msec): min=151, max=448, avg=328.24, stdev=88.06 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 153], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 222], 00:34:31.189 | 30.00th=[ 243], 40.00th=[ 342], 50.00th=[ 372], 60.00th=[ 380], 00:34:31.189 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 418], 95.00th=[ 426], 00:34:31.189 | 99.00th=[ 443], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:34:31.189 | 99.99th=[ 447] 00:34:31.189 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=192.00, stdev=76.47, samples=20 00:34:31.189 iops : min= 32, max= 96, avg=48.00, stdev=19.12, samples=20 00:34:31.189 lat (msec) : 250=31.85%, 500=68.15% 00:34:31.189 cpu : usr=98.20%, sys=1.25%, ctx=14, majf=0, minf=48 00:34:31.189 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:34:31.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.189 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.189 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:34:31.189 filename1: (groupid=0, jobs=1): err= 0: pid=405329: Thu Jul 11 23:47:50 2024 00:34:31.189 read: IOPS=45, BW=183KiB/s (187kB/s)(1856KiB/10169msec) 00:34:31.189 slat (usec): min=8, max=117, avg=45.10, stdev=27.26 00:34:31.189 clat (msec): min=170, max=561, avg=350.20, stdev=79.41 00:34:31.189 lat (msec): min=170, max=561, avg=350.24, stdev=79.40 00:34:31.189 clat percentiles (msec): 00:34:31.189 | 1.00th=[ 171], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 243], 00:34:31.189 | 30.00th=[ 347], 40.00th=[ 368], 50.00th=[ 380], 60.00th=[ 388], 00:34:31.189 | 70.00th=[ 393], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 451], 00:34:31.189 | 99.00th=[ 464], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 558], 00:34:31.189 | 99.99th=[ 558] 00:34:31.189 bw ( KiB/s): min= 128, max= 368, per=3.76%, avg=179.20, stdev=70.34, samples=20 00:34:31.189 iops : min= 32, max= 92, avg=44.80, stdev=17.58, samples=20 00:34:31.189 lat (msec) : 250=22.41%, 500=76.72%, 750=0.86% 00:34:31.189 cpu : usr=97.81%, sys=1.30%, ctx=121, majf=0, minf=28 00:34:31.190 IO depths : 1=4.3%, 2=10.3%, 4=24.4%, 8=52.8%, 16=8.2%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename1: (groupid=0, jobs=1): err= 0: pid=405330: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=46, BW=188KiB/s (192kB/s)(1912KiB/10174msec) 00:34:31.190 slat (usec): min=8, max=127, avg=31.24, stdev=14.87 00:34:31.190 clat (msec): min=202, max=762, avg=340.06, stdev=86.23 00:34:31.190 lat (msec): min=202, max=762, avg=340.09, stdev=86.22 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 209], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 243], 00:34:31.190 | 30.00th=[ 268], 40.00th=[ 326], 50.00th=[ 372], 60.00th=[ 380], 00:34:31.190 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 430], 95.00th=[ 451], 00:34:31.190 | 99.00th=[ 518], 99.50th=[ 527], 99.90th=[ 760], 99.95th=[ 760], 00:34:31.190 | 99.99th=[ 760] 00:34:31.190 bw ( KiB/s): min= 112, max= 384, per=3.86%, avg=184.80, stdev=75.67, samples=20 00:34:31.190 iops : min= 28, max= 96, avg=46.20, stdev=18.92, samples=20 00:34:31.190 lat (msec) : 250=20.50%, 500=76.57%, 750=2.51%, 1000=0.42% 00:34:31.190 cpu : usr=98.47%, sys=0.95%, ctx=39, majf=0, minf=32 00:34:31.190 IO depths : 1=3.1%, 2=9.2%, 4=24.5%, 8=54.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 issued rwts: total=478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename2: (groupid=0, jobs=1): err= 0: pid=405331: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=48, BW=195KiB/s (200kB/s)(1984KiB/10181msec) 00:34:31.190 slat (usec): min=8, max=101, avg=36.03, stdev=16.27 00:34:31.190 clat (msec): min=153, max=449, avg=328.09, stdev=84.64 00:34:31.190 lat (msec): min=153, max=449, avg=328.13, stdev=84.64 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 155], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 230], 00:34:31.190 | 30.00th=[ 259], 40.00th=[ 342], 50.00th=[ 368], 60.00th=[ 380], 00:34:31.190 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 418], 
95.00th=[ 426], 00:34:31.190 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:34:31.190 | 99.99th=[ 451] 00:34:31.190 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=192.00, stdev=76.47, samples=20 00:34:31.190 iops : min= 32, max= 96, avg=48.00, stdev=19.12, samples=20 00:34:31.190 lat (msec) : 250=29.03%, 500=70.97% 00:34:31.190 cpu : usr=98.46%, sys=1.09%, ctx=32, majf=0, minf=23 00:34:31.190 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename2: (groupid=0, jobs=1): err= 0: pid=405332: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=45, BW=183KiB/s (187kB/s)(1856KiB/10163msec) 00:34:31.190 slat (usec): min=8, max=132, avg=38.50, stdev=26.53 00:34:31.190 clat (msec): min=199, max=525, avg=350.12, stdev=79.36 00:34:31.190 lat (msec): min=199, max=525, avg=350.15, stdev=79.34 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 207], 5.00th=[ 209], 10.00th=[ 226], 20.00th=[ 243], 00:34:31.190 | 30.00th=[ 317], 40.00th=[ 363], 50.00th=[ 388], 60.00th=[ 393], 00:34:31.190 | 70.00th=[ 401], 80.00th=[ 422], 90.00th=[ 435], 95.00th=[ 447], 00:34:31.190 | 99.00th=[ 447], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 527], 00:34:31.190 | 99.99th=[ 527] 00:34:31.190 bw ( KiB/s): min= 128, max= 368, per=3.76%, avg=179.20, stdev=73.89, samples=20 00:34:31.190 iops : min= 32, max= 92, avg=44.80, stdev=18.47, samples=20 00:34:31.190 lat (msec) : 250=20.69%, 500=78.88%, 750=0.43% 00:34:31.190 cpu : usr=98.48%, sys=0.99%, ctx=26, majf=0, minf=32 00:34:31.190 IO depths : 1=4.1%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename2: (groupid=0, jobs=1): err= 0: pid=405333: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=48, BW=194KiB/s (199kB/s)(1976KiB/10181msec) 00:34:31.190 slat (usec): min=8, max=110, avg=31.05, stdev=17.62 00:34:31.190 clat (msec): min=152, max=736, avg=329.16, stdev=92.96 00:34:31.190 lat (msec): min=152, max=736, avg=329.19, stdev=92.96 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 153], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 230], 00:34:31.190 | 30.00th=[ 245], 40.00th=[ 342], 50.00th=[ 363], 60.00th=[ 380], 00:34:31.190 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 447], 00:34:31.190 | 99.00th=[ 535], 99.50th=[ 542], 99.90th=[ 735], 99.95th=[ 735], 00:34:31.190 | 99.99th=[ 735] 00:34:31.190 bw ( KiB/s): min= 112, max= 384, per=4.01%, avg=191.20, stdev=74.77, samples=20 00:34:31.190 iops : min= 28, max= 96, avg=47.80, stdev=18.69, samples=20 00:34:31.190 lat (msec) : 250=31.58%, 500=66.40%, 750=2.02% 00:34:31.190 cpu : usr=98.61%, sys=0.99%, ctx=16, majf=0, minf=27 00:34:31.190 IO depths : 1=3.4%, 2=9.7%, 4=25.1%, 8=52.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 
issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename2: (groupid=0, jobs=1): err= 0: pid=405334: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=53, BW=213KiB/s (218kB/s)(2176KiB/10201msec) 00:34:31.190 slat (usec): min=4, max=157, avg=66.61, stdev=41.57 00:34:31.190 clat (msec): min=13, max=599, avg=299.33, stdev=111.17 00:34:31.190 lat (msec): min=13, max=599, avg=299.40, stdev=111.20 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 146], 20.00th=[ 209], 00:34:31.190 | 30.00th=[ 230], 40.00th=[ 317], 50.00th=[ 342], 60.00th=[ 368], 00:34:31.190 | 70.00th=[ 380], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 405], 00:34:31.190 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 600], 99.95th=[ 600], 00:34:31.190 | 99.99th=[ 600] 00:34:31.190 bw ( KiB/s): min= 128, max= 512, per=4.43%, avg=211.20, stdev=103.25, samples=20 00:34:31.190 iops : min= 32, max= 128, avg=52.80, stdev=25.81, samples=20 00:34:31.190 lat (msec) : 20=2.94%, 50=2.94%, 250=30.51%, 500=62.50%, 750=1.10% 00:34:31.190 cpu : usr=98.64%, sys=0.94%, ctx=15, majf=0, minf=39 00:34:31.190 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename2: (groupid=0, jobs=1): err= 0: pid=405335: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=64, BW=256KiB/s (263kB/s)(2616KiB/10199msec) 00:34:31.190 slat (usec): min=8, max=254, avg=22.25, stdev=20.89 00:34:31.190 clat (msec): min=15, max=556, avg=248.74, stdev=90.80 00:34:31.190 lat (msec): min=15, max=556, avg=248.76, stdev=90.80 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 16], 5.00th=[ 68], 10.00th=[ 122], 20.00th=[ 199], 00:34:31.190 | 30.00th=[ 211], 40.00th=[ 234], 50.00th=[ 243], 60.00th=[ 288], 00:34:31.190 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 359], 95.00th=[ 397], 00:34:31.190 | 99.00th=[ 435], 99.50th=[ 493], 99.90th=[ 558], 99.95th=[ 558], 00:34:31.190 | 99.99th=[ 558] 00:34:31.190 bw ( KiB/s): min= 112, max= 512, per=5.35%, avg=255.20, stdev=81.33, samples=20 00:34:31.190 iops : min= 28, max= 128, avg=63.80, stdev=20.33, samples=20 00:34:31.190 lat (msec) : 20=2.45%, 50=2.45%, 100=2.45%, 250=48.32%, 500=44.04% 00:34:31.190 lat (msec) : 750=0.31% 00:34:31.190 cpu : usr=97.22%, sys=1.77%, ctx=51, majf=0, minf=54 00:34:31.190 IO depths : 1=3.2%, 2=8.4%, 4=22.0%, 8=57.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename2: (groupid=0, jobs=1): err= 0: pid=405336: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=48, BW=195KiB/s (200kB/s)(1984KiB/10181msec) 00:34:31.190 slat (usec): min=6, max=155, avg=66.96, stdev=37.21 00:34:31.190 clat (msec): min=152, max=448, avg=327.83, stdev=85.68 00:34:31.190 lat (msec): min=152, max=448, avg=327.90, stdev=85.69 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 153], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 230], 
00:34:31.190 | 30.00th=[ 259], 40.00th=[ 326], 50.00th=[ 372], 60.00th=[ 384], 00:34:31.190 | 70.00th=[ 388], 80.00th=[ 397], 90.00th=[ 414], 95.00th=[ 435], 00:34:31.190 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:34:31.190 | 99.99th=[ 451] 00:34:31.190 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=192.00, stdev=77.69, samples=20 00:34:31.190 iops : min= 32, max= 96, avg=48.00, stdev=19.42, samples=20 00:34:31.190 lat (msec) : 250=29.03%, 500=70.97% 00:34:31.190 cpu : usr=98.74%, sys=0.84%, ctx=21, majf=0, minf=31 00:34:31.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.190 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.190 filename2: (groupid=0, jobs=1): err= 0: pid=405337: Thu Jul 11 23:47:50 2024 00:34:31.190 read: IOPS=47, BW=189KiB/s (193kB/s)(1920KiB/10170msec) 00:34:31.190 slat (usec): min=8, max=153, avg=65.75, stdev=40.23 00:34:31.190 clat (msec): min=187, max=537, avg=338.34, stdev=82.61 00:34:31.190 lat (msec): min=187, max=537, avg=338.41, stdev=82.63 00:34:31.190 clat percentiles (msec): 00:34:31.190 | 1.00th=[ 188], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 230], 00:34:31.190 | 30.00th=[ 284], 40.00th=[ 326], 50.00th=[ 368], 60.00th=[ 380], 00:34:31.190 | 70.00th=[ 393], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 447], 00:34:31.190 | 99.00th=[ 493], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542], 00:34:31.190 | 99.99th=[ 542] 00:34:31.190 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=185.60, stdev=74.94, samples=20 00:34:31.190 iops : min= 32, max= 96, avg=46.40, stdev=18.73, samples=20 00:34:31.190 lat (msec) : 250=24.17%, 500=75.00%, 750=0.83% 00:34:31.190 cpu : usr=98.55%, sys=0.94%, ctx=47, majf=0, minf=30 00:34:31.191 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:34:31.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.191 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.191 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.191 filename2: (groupid=0, jobs=1): err= 0: pid=405338: Thu Jul 11 23:47:50 2024 00:34:31.191 read: IOPS=47, BW=188KiB/s (193kB/s)(1912KiB/10162msec) 00:34:31.191 slat (usec): min=8, max=144, avg=36.48, stdev=26.85 00:34:31.191 clat (msec): min=183, max=579, avg=339.48, stdev=81.69 00:34:31.191 lat (msec): min=183, max=579, avg=339.51, stdev=81.67 00:34:31.191 clat percentiles (msec): 00:34:31.191 | 1.00th=[ 186], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 234], 00:34:31.191 | 30.00th=[ 284], 40.00th=[ 326], 50.00th=[ 368], 60.00th=[ 380], 00:34:31.191 | 70.00th=[ 393], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 451], 00:34:31.191 | 99.00th=[ 527], 99.50th=[ 531], 99.90th=[ 584], 99.95th=[ 584], 00:34:31.191 | 99.99th=[ 584] 00:34:31.191 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=184.80, stdev=72.95, samples=20 00:34:31.191 iops : min= 32, max= 96, avg=46.20, stdev=18.24, samples=20 00:34:31.191 lat (msec) : 250=24.27%, 500=74.06%, 750=1.67% 00:34:31.191 cpu : usr=97.77%, sys=1.41%, ctx=56, majf=0, minf=33 00:34:31.191 IO depths : 1=2.3%, 2=8.6%, 4=25.1%, 8=54.0%, 16=10.0%, 32=0.0%, >=64=0.0% 00:34:31.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:31.191 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:31.191 issued rwts: total=478,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:31.191 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:31.191 
00:34:31.191 Run status group 0 (all jobs):
00:34:31.191 READ: bw=4767KiB/s (4881kB/s), 183KiB/s-260KiB/s (187kB/s-266kB/s), io=47.5MiB (49.8MB), run=10137-10201msec
00:34:31.191 23:47:50 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:34:31.191 23:47:50 -- target/dif.sh@43 -- # local sub
00:34:31.191 23:47:50 -- target/dif.sh@45 -- # for sub in "$@"
00:34:31.191 23:47:50 -- target/dif.sh@46 -- # destroy_subsystem 0
00:34:31.191 23:47:50 -- target/dif.sh@36 -- # local sub_id=0
00:34:31.191 23:47:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x
00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:31.191 23:47:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x
00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:31.191 23:47:50 -- target/dif.sh@45 -- # for sub in "$@"
00:34:31.191 23:47:50 -- target/dif.sh@46 -- # destroy_subsystem 1
00:34:31.191 23:47:50 -- target/dif.sh@36 -- # local sub_id=1
00:34:31.191 23:47:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x
00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:31.191 23:47:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x
00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:31.191 23:47:50 -- target/dif.sh@45 -- # for sub in "$@"
00:34:31.191 23:47:50 -- target/dif.sh@46 -- # destroy_subsystem 2
00:34:31.191 23:47:50 -- target/dif.sh@36 -- # local sub_id=2
00:34:31.191 23:47:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x
00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:31.191 23:47:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x
00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:31.191 23:47:50 -- target/dif.sh@115 -- # NULL_DIF=1
00:34:31.191 23:47:50 -- target/dif.sh@115 -- # bs=8k,16k,128k
00:34:31.191 23:47:50 -- target/dif.sh@115 -- # numjobs=2
00:34:31.191 23:47:50 -- target/dif.sh@115 -- # iodepth=8
00:34:31.191 23:47:50 -- target/dif.sh@115 -- # runtime=5
00:34:31.191 23:47:50 -- target/dif.sh@115 -- # files=1
00:34:31.191 23:47:50 -- target/dif.sh@117 -- # create_subsystems 0 1
00:34:31.191 23:47:50 -- target/dif.sh@28 -- # local sub
00:34:31.191 23:47:50 -- target/dif.sh@30 -- # for sub in "$@"
00:34:31.191
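The READ line above closes the 24-thread group at an aggregate 4767KiB/s, and the surrounding xtrace records tear down the three DIF-type-2 subsystems and rebuild two of them with NULL_DIF=1 for the next group. A minimal sketch of one such create/destroy cycle, assuming SPDK's stock scripts/rpc.py client pointed at the running nvmf_tgt (rpc_cmd in this log is the harness's wrapper for the same RPCs); the NQN, serial number, listen address and port are copied from the commands logged here:

sub=0
nqn="nqn.2016-06.io.spdk:cnode${sub}"

# create_subsystem: a 64 MiB null bdev with 512-byte blocks, 16 bytes of
# per-block metadata and the requested DIF type, exported over NVMe/TCP
scripts/rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem "$nqn" --serial-number "53313233-${sub}" --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns "$nqn" "bdev_null${sub}"
scripts/rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# destroy_subsystem: reverse order once the fio group has finished
scripts/rpc.py nvmf_delete_subsystem "$nqn"
scripts/rpc.py bdev_null_delete "bdev_null${sub}"

dif.sh simply loops this pair over every subsystem index it is handed, which is what the repeated for sub in "$@" records above and below show.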
23:47:50 -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.191 23:47:50 -- target/dif.sh@18 -- # local sub_id=0 00:34:31.191 23:47:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 bdev_null0 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 [2024-07-11 23:47:50.537509] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.191 23:47:50 -- target/dif.sh@31 -- # create_subsystem 1 00:34:31.191 23:47:50 -- target/dif.sh@18 -- # local sub_id=1 00:34:31.191 23:47:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 bdev_null1 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.191 23:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.191 23:47:50 -- common/autotest_common.sh@10 -- # set +x 00:34:31.191 23:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.191 23:47:50 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:31.191 23:47:50 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:31.191 23:47:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:31.191 23:47:50 -- nvmf/common.sh@520 -- # config=() 00:34:31.191 23:47:50 -- nvmf/common.sh@520 -- # 
local subsystem config 00:34:31.191 23:47:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:31.191 23:47:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:31.191 { 00:34:31.191 "params": { 00:34:31.191 "name": "Nvme$subsystem", 00:34:31.191 "trtype": "$TEST_TRANSPORT", 00:34:31.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.191 "adrfam": "ipv4", 00:34:31.191 "trsvcid": "$NVMF_PORT", 00:34:31.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.191 "hdgst": ${hdgst:-false}, 00:34:31.191 "ddgst": ${ddgst:-false} 00:34:31.191 }, 00:34:31.191 "method": "bdev_nvme_attach_controller" 00:34:31.191 } 00:34:31.191 EOF 00:34:31.191 )") 00:34:31.191 23:47:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.191 23:47:50 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.191 23:47:50 -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.191 23:47:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:31.191 23:47:50 -- target/dif.sh@54 -- # local file 00:34:31.191 23:47:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.191 23:47:50 -- target/dif.sh@56 -- # cat 00:34:31.191 23:47:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:31.191 23:47:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.191 23:47:50 -- common/autotest_common.sh@1320 -- # shift 00:34:31.191 23:47:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:31.191 23:47:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.191 23:47:50 -- nvmf/common.sh@542 -- # cat 00:34:31.191 23:47:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.191 23:47:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.191 23:47:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:31.191 23:47:50 -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.191 23:47:50 -- target/dif.sh@73 -- # cat 00:34:31.191 23:47:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.191 23:47:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:31.191 23:47:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:31.191 { 00:34:31.191 "params": { 00:34:31.191 "name": "Nvme$subsystem", 00:34:31.191 "trtype": "$TEST_TRANSPORT", 00:34:31.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.191 "adrfam": "ipv4", 00:34:31.191 "trsvcid": "$NVMF_PORT", 00:34:31.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.191 "hdgst": ${hdgst:-false}, 00:34:31.191 "ddgst": ${ddgst:-false} 00:34:31.191 }, 00:34:31.191 "method": "bdev_nvme_attach_controller" 00:34:31.191 } 00:34:31.191 EOF 00:34:31.191 )") 00:34:31.191 23:47:50 -- nvmf/common.sh@542 -- # cat 00:34:31.191 23:47:50 -- target/dif.sh@72 -- # (( file++ )) 00:34:31.191 23:47:50 -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.191 23:47:50 -- nvmf/common.sh@544 -- # jq . 
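The config+=("$(cat <<-EOF ... EOF)") records above are gen_nvmf_target_json collecting one bdev_nvme_attach_controller stanza per subsystem; the jq . record appears to be a validity check, and the IFS=, / printf pair that follows joins the stanzas with commas into the JSON fio reads from /dev/fd/62. A condensed sketch of that pattern, with the variables hard-coded to the values this run resolves them to (the real helper reads $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT from the environment):

config=()
for subsystem in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # ${config[*]} joins elements on the first character of IFS

fio then consumes that JSON with the SPDK bdev engine preloaded, exactly as the invocation a few records below shows: LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61, where /dev/fd/61 carries the job file produced by gen_fio_conf.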
00:34:31.191 23:47:50 -- nvmf/common.sh@545 -- # IFS=, 00:34:31.192 23:47:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:31.192 "params": { 00:34:31.192 "name": "Nvme0", 00:34:31.192 "trtype": "tcp", 00:34:31.192 "traddr": "10.0.0.2", 00:34:31.192 "adrfam": "ipv4", 00:34:31.192 "trsvcid": "4420", 00:34:31.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.192 "hdgst": false, 00:34:31.192 "ddgst": false 00:34:31.192 }, 00:34:31.192 "method": "bdev_nvme_attach_controller" 00:34:31.192 },{ 00:34:31.192 "params": { 00:34:31.192 "name": "Nvme1", 00:34:31.192 "trtype": "tcp", 00:34:31.192 "traddr": "10.0.0.2", 00:34:31.192 "adrfam": "ipv4", 00:34:31.192 "trsvcid": "4420", 00:34:31.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:31.192 "hdgst": false, 00:34:31.192 "ddgst": false 00:34:31.192 }, 00:34:31.192 "method": "bdev_nvme_attach_controller" 00:34:31.192 }' 00:34:31.192 23:47:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.192 23:47:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.192 23:47:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.192 23:47:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.192 23:47:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:31.192 23:47:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.192 23:47:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.192 23:47:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.192 23:47:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.192 23:47:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.192 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.192 ... 00:34:31.192 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.192 ... 00:34:31.192 fio-3.35 00:34:31.192 Starting 4 threads 00:34:31.192 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.192 [2024-07-11 23:47:51.703768] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:31.192 [2024-07-11 23:47:51.703831] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:36.449 00:34:36.449 filename0: (groupid=0, jobs=1): err= 0: pid=406769: Thu Jul 11 23:47:56 2024 00:34:36.449 read: IOPS=1842, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5002msec) 00:34:36.449 slat (nsec): min=4218, max=45500, avg=12738.82, stdev=6122.72 00:34:36.449 clat (usec): min=1327, max=49295, avg=4302.47, stdev=1495.17 00:34:36.449 lat (usec): min=1335, max=49308, avg=4315.21, stdev=1494.89 00:34:36.449 clat percentiles (usec): 00:34:36.449 | 1.00th=[ 2966], 5.00th=[ 3359], 10.00th=[ 3589], 20.00th=[ 3785], 00:34:36.449 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4293], 00:34:36.449 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5211], 95.00th=[ 5866], 00:34:36.449 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7504], 99.95th=[49021], 00:34:36.449 | 99.99th=[49546] 00:34:36.449 bw ( KiB/s): min=14076, max=15088, per=24.85%, avg=14719.56, stdev=294.49, samples=9 00:34:36.449 iops : min= 1759, max= 1886, avg=1839.89, stdev=36.95, samples=9 00:34:36.449 lat (msec) : 2=0.09%, 4=34.39%, 10=65.44%, 50=0.09% 00:34:36.449 cpu : usr=95.00%, sys=4.08%, ctx=164, majf=0, minf=9 00:34:36.449 IO depths : 1=0.1%, 2=3.0%, 4=69.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 issued rwts: total=9215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.449 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.449 filename0: (groupid=0, jobs=1): err= 0: pid=406770: Thu Jul 11 23:47:56 2024 00:34:36.449 read: IOPS=1886, BW=14.7MiB/s (15.5MB/s)(73.7MiB/5003msec) 00:34:36.449 slat (nsec): min=4034, max=79355, avg=16214.74, stdev=6739.40 00:34:36.449 clat (usec): min=1016, max=8185, avg=4191.81, stdev=751.91 00:34:36.449 lat (usec): min=1034, max=8197, avg=4208.02, stdev=751.53 00:34:36.449 clat percentiles (usec): 00:34:36.449 | 1.00th=[ 2671], 5.00th=[ 3163], 10.00th=[ 3425], 20.00th=[ 3687], 00:34:36.449 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4228], 00:34:36.449 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 5276], 95.00th=[ 5866], 00:34:36.449 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 8029], 00:34:36.449 | 99.99th=[ 8160] 00:34:36.449 bw ( KiB/s): min=14544, max=16689, per=25.49%, avg=15096.10, stdev=595.83, samples=10 00:34:36.449 iops : min= 1818, max= 2086, avg=1887.00, stdev=74.44, samples=10 00:34:36.449 lat (msec) : 2=0.11%, 4=42.35%, 10=57.55% 00:34:36.449 cpu : usr=94.70%, sys=4.50%, ctx=15, majf=0, minf=9 00:34:36.449 IO depths : 1=0.1%, 2=2.6%, 4=69.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 issued rwts: total=9436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.449 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.449 filename1: (groupid=0, jobs=1): err= 0: pid=406771: Thu Jul 11 23:47:56 2024 00:34:36.449 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5003msec) 00:34:36.449 slat (nsec): min=3856, max=94915, avg=12291.42, stdev=5500.18 00:34:36.449 clat (usec): min=1726, max=11355, avg=4311.38, stdev=722.23 00:34:36.449 lat (usec): min=1734, max=11368, avg=4323.67, stdev=722.06 00:34:36.449 clat percentiles (usec): 00:34:36.449 | 
1.00th=[ 2999], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3851], 00:34:36.449 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4293], 00:34:36.449 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5211], 95.00th=[ 5932], 00:34:36.449 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 8225], 99.95th=[ 9110], 00:34:36.449 | 99.99th=[11338] 00:34:36.449 bw ( KiB/s): min=14096, max=15680, per=24.84%, avg=14712.00, stdev=421.05, samples=10 00:34:36.449 iops : min= 1762, max= 1960, avg=1839.00, stdev=52.63, samples=10 00:34:36.449 lat (msec) : 2=0.03%, 4=32.58%, 10=67.37%, 20=0.02% 00:34:36.449 cpu : usr=91.66%, sys=5.72%, ctx=220, majf=0, minf=0 00:34:36.449 IO depths : 1=0.4%, 2=1.4%, 4=71.7%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 issued rwts: total=9200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.449 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.449 filename1: (groupid=0, jobs=1): err= 0: pid=406772: Thu Jul 11 23:47:56 2024 00:34:36.449 read: IOPS=1837, BW=14.4MiB/s (15.1MB/s)(71.8MiB/5001msec) 00:34:36.449 slat (nsec): min=4209, max=48968, avg=13153.93, stdev=6135.58 00:34:36.449 clat (usec): min=1775, max=7648, avg=4315.48, stdev=794.65 00:34:36.449 lat (usec): min=1783, max=7663, avg=4328.63, stdev=793.78 00:34:36.449 clat percentiles (usec): 00:34:36.449 | 1.00th=[ 2769], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3818], 00:34:36.449 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4178], 60.00th=[ 4293], 00:34:36.449 | 70.00th=[ 4359], 80.00th=[ 4686], 90.00th=[ 5604], 95.00th=[ 6063], 00:34:36.449 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7242], 99.95th=[ 7439], 00:34:36.449 | 99.99th=[ 7635] 00:34:36.449 bw ( KiB/s): min=14096, max=16208, per=24.77%, avg=14673.78, stdev=606.76, samples=9 00:34:36.449 iops : min= 1762, max= 2026, avg=1834.22, stdev=75.84, samples=9 00:34:36.449 lat (msec) : 2=0.09%, 4=37.34%, 10=62.57% 00:34:36.449 cpu : usr=96.16%, sys=3.40%, ctx=7, majf=0, minf=9 00:34:36.449 IO depths : 1=0.1%, 2=1.4%, 4=70.3%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.449 issued rwts: total=9188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.449 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.449 00:34:36.449 Run status group 0 (all jobs): 00:34:36.449 READ: bw=57.8MiB/s (60.6MB/s), 14.4MiB/s-14.7MiB/s (15.1MB/s-15.5MB/s), io=289MiB (303MB), run=5001-5003msec 00:34:36.449 23:47:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:36.449 23:47:57 -- target/dif.sh@43 -- # local sub 00:34:36.449 23:47:57 -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.449 23:47:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:36.449 23:47:57 -- target/dif.sh@36 -- # local sub_id=0 00:34:36.449 23:47:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:36.449 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.449 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.449 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.449 23:47:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:36.449 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.449 23:47:57 -- 
common/autotest_common.sh@10 -- # set +x 00:34:36.449 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.449 23:47:57 -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.449 23:47:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:36.449 23:47:57 -- target/dif.sh@36 -- # local sub_id=1 00:34:36.449 23:47:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.449 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.449 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.449 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.449 23:47:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:36.449 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.449 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.449 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.449 00:34:36.449 real 0m24.657s 00:34:36.449 user 4m38.435s 00:34:36.450 sys 0m5.279s 00:34:36.450 23:47:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.450 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.450 ************************************ 00:34:36.450 END TEST fio_dif_rand_params 00:34:36.450 ************************************ 00:34:36.450 23:47:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:36.450 23:47:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:36.450 23:47:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:36.450 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.450 ************************************ 00:34:36.450 START TEST fio_dif_digest 00:34:36.450 ************************************ 00:34:36.450 23:47:57 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:34:36.450 23:47:57 -- target/dif.sh@123 -- # local NULL_DIF 00:34:36.450 23:47:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:36.450 23:47:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:36.450 23:47:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:36.450 23:47:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:36.450 23:47:57 -- target/dif.sh@127 -- # numjobs=3 00:34:36.450 23:47:57 -- target/dif.sh@127 -- # iodepth=3 00:34:36.450 23:47:57 -- target/dif.sh@127 -- # runtime=10 00:34:36.450 23:47:57 -- target/dif.sh@128 -- # hdgst=true 00:34:36.450 23:47:57 -- target/dif.sh@128 -- # ddgst=true 00:34:36.450 23:47:57 -- target/dif.sh@130 -- # create_subsystems 0 00:34:36.450 23:47:57 -- target/dif.sh@28 -- # local sub 00:34:36.450 23:47:57 -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.450 23:47:57 -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.450 23:47:57 -- target/dif.sh@18 -- # local sub_id=0 00:34:36.450 23:47:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:36.450 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.450 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.450 bdev_null0 00:34:36.450 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.450 23:47:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.450 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.450 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.450 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.450 23:47:57 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.450 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.450 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.450 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.450 23:47:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.450 23:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.450 23:47:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.450 [2024-07-11 23:47:57.191291] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.450 23:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.450 23:47:57 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:36.450 23:47:57 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:36.450 23:47:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:36.450 23:47:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.450 23:47:57 -- nvmf/common.sh@520 -- # config=() 00:34:36.450 23:47:57 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.450 23:47:57 -- nvmf/common.sh@520 -- # local subsystem config 00:34:36.450 23:47:57 -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.450 23:47:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:36.450 23:47:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:36.450 23:47:57 -- target/dif.sh@54 -- # local file 00:34:36.450 23:47:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.450 23:47:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:36.450 { 00:34:36.450 "params": { 00:34:36.450 "name": "Nvme$subsystem", 00:34:36.450 "trtype": "$TEST_TRANSPORT", 00:34:36.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.450 "adrfam": "ipv4", 00:34:36.450 "trsvcid": "$NVMF_PORT", 00:34:36.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.450 "hdgst": ${hdgst:-false}, 00:34:36.450 "ddgst": ${ddgst:-false} 00:34:36.450 }, 00:34:36.450 "method": "bdev_nvme_attach_controller" 00:34:36.450 } 00:34:36.450 EOF 00:34:36.450 )") 00:34:36.450 23:47:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:36.450 23:47:57 -- target/dif.sh@56 -- # cat 00:34:36.450 23:47:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.450 23:47:57 -- common/autotest_common.sh@1320 -- # shift 00:34:36.450 23:47:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:36.450 23:47:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.450 23:47:57 -- nvmf/common.sh@542 -- # cat 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.450 23:47:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:36.450 23:47:57 -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:36.450 23:47:57 -- nvmf/common.sh@544 -- # jq . 
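[Annotation — not part of the captured log] The rpc_cmd calls traced above assemble the digest-test target: a null bdev carrying 16 bytes of metadata with DIF type 3 protection, exported over NVMe/TCP on 10.0.0.2:4420. The same sequence can be issued directly with SPDK's rpc.py; method names and arguments are exactly as logged, while the rpc.py path and the up-front transport creation (taken from the abort_qd_sizes trace later in this log) are assumptions about state the harness established earlier.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                        # once per target, before any listener
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MB, 512 B blocks + 16 B metadata
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420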
00:34:36.450 23:47:57 -- nvmf/common.sh@545 -- # IFS=, 00:34:36.450 23:47:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:36.450 "params": { 00:34:36.450 "name": "Nvme0", 00:34:36.450 "trtype": "tcp", 00:34:36.450 "traddr": "10.0.0.2", 00:34:36.450 "adrfam": "ipv4", 00:34:36.450 "trsvcid": "4420", 00:34:36.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.450 "hdgst": true, 00:34:36.450 "ddgst": true 00:34:36.450 }, 00:34:36.450 "method": "bdev_nvme_attach_controller" 00:34:36.450 }' 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:36.450 23:47:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:36.450 23:47:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:36.450 23:47:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:36.450 23:47:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:36.450 23:47:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.450 23:47:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.708 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:36.708 ... 00:34:36.708 fio-3.35 00:34:36.708 Starting 3 threads 00:34:36.708 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.966 [2024-07-11 23:47:57.869388] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
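[Annotation — not part of the captured log] The ldd/grep/awk probe traced just above decides whether a sanitizer runtime must be preloaded ahead of the spdk_bdev plugin: fio itself is not built with ASan, so any ASan library the plugin links against has to come first in LD_PRELOAD or the plugin fails to load. A condensed sketch of the same logic follows; the plugin path and fio invocation mirror the trace, the break-on-first-match loop is a simplification, and the config/job file paths are hypothetical stand-ins for the /dev/fd descriptors the harness wires up.

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # third ldd column is the resolved library path; empty when not linked
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# Sanitizer runtime (if any) first, then the plugin fio will dlopen.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf ./target.json ./digest.fio   # hypothetical paths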
00:34:36.966 [2024-07-11 23:47:57.869484] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:49.156 00:34:49.156 filename0: (groupid=0, jobs=1): err= 0: pid=407660: Thu Jul 11 23:48:08 2024 00:34:49.156 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(187MiB/10048msec) 00:34:49.156 slat (nsec): min=4978, max=45211, avg=15551.48, stdev=2676.55 00:34:49.156 clat (usec): min=8193, max=96034, avg=20125.00, stdev=11268.16 00:34:49.156 lat (usec): min=8207, max=96049, avg=20140.55, stdev=11268.15 00:34:49.156 clat percentiles (usec): 00:34:49.156 | 1.00th=[ 9634], 5.00th=[12256], 10.00th=[14877], 20.00th=[15795], 00:34:49.156 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17433], 60.00th=[17957], 00:34:49.156 | 70.00th=[18482], 80.00th=[19006], 90.00th=[20317], 95.00th=[57410], 00:34:49.156 | 99.00th=[60556], 99.50th=[61080], 99.90th=[62129], 99.95th=[95945], 00:34:49.156 | 99.99th=[95945] 00:34:49.156 bw ( KiB/s): min=14848, max=22784, per=26.95%, avg=19097.60, stdev=2629.65, samples=20 00:34:49.156 iops : min= 116, max= 178, avg=149.20, stdev=20.54, samples=20 00:34:49.156 lat (msec) : 10=1.27%, 20=87.08%, 50=3.95%, 100=7.70% 00:34:49.156 cpu : usr=93.28%, sys=6.22%, ctx=19, majf=0, minf=143 00:34:49.156 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.156 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.156 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.156 filename0: (groupid=0, jobs=1): err= 0: pid=407661: Thu Jul 11 23:48:08 2024 00:34:49.156 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(257MiB/10006msec) 00:34:49.156 slat (nsec): min=4730, max=54088, avg=17675.56, stdev=6363.38 00:34:49.156 clat (usec): min=6797, max=94704, avg=14562.76, stdev=5577.53 00:34:49.156 lat (usec): min=6810, max=94721, avg=14580.43, stdev=5577.49 00:34:49.156 clat percentiles (usec): 00:34:49.156 | 1.00th=[ 7570], 5.00th=[10159], 10.00th=[10683], 20.00th=[12125], 00:34:49.156 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14484], 60.00th=[14877], 00:34:49.156 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:34:49.156 | 99.00th=[52691], 99.50th=[54789], 99.90th=[56886], 99.95th=[57410], 00:34:49.156 | 99.99th=[94897] 00:34:49.156 bw ( KiB/s): min=22272, max=29440, per=37.13%, avg=26306.50, stdev=1794.82, samples=20 00:34:49.156 iops : min= 174, max= 230, avg=205.50, stdev=14.04, samples=20 00:34:49.156 lat (msec) : 10=4.62%, 20=93.83%, 50=0.10%, 100=1.46% 00:34:49.156 cpu : usr=92.39%, sys=7.00%, ctx=20, majf=0, minf=154 00:34:49.156 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.156 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.156 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.156 filename0: (groupid=0, jobs=1): err= 0: pid=407662: Thu Jul 11 23:48:08 2024 00:34:49.156 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(251MiB/10047msec) 00:34:49.156 slat (nsec): min=4932, max=35474, avg=15301.51, stdev=2904.69 00:34:49.156 clat (usec): min=6959, max=58610, avg=14953.83, stdev=6163.83 00:34:49.156 lat (usec): min=6973, max=58624, avg=14969.13, stdev=6163.81 00:34:49.156 clat percentiles 
(usec): 00:34:49.156 | 1.00th=[ 8094], 5.00th=[10421], 10.00th=[11076], 20.00th=[12649], 00:34:49.156 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:34:49.156 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16909], 00:34:49.156 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:34:49.156 | 99.99th=[58459] 00:34:49.156 bw ( KiB/s): min=22528, max=28928, per=36.27%, avg=25702.40, stdev=1841.18, samples=20 00:34:49.156 iops : min= 176, max= 226, avg=200.80, stdev=14.38, samples=20 00:34:49.156 lat (msec) : 10=3.38%, 20=94.43%, 50=0.05%, 100=2.14% 00:34:49.156 cpu : usr=92.15%, sys=7.33%, ctx=21, majf=0, minf=168 00:34:49.156 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.156 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.156 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.156 00:34:49.156 Run status group 0 (all jobs): 00:34:49.156 READ: bw=69.2MiB/s (72.6MB/s), 18.6MiB/s-25.7MiB/s (19.5MB/s-27.0MB/s), io=695MiB (729MB), run=10006-10048msec 00:34:49.156 23:48:08 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:49.156 23:48:08 -- target/dif.sh@43 -- # local sub 00:34:49.156 23:48:08 -- target/dif.sh@45 -- # for sub in "$@" 00:34:49.156 23:48:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:49.156 23:48:08 -- target/dif.sh@36 -- # local sub_id=0 00:34:49.156 23:48:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:49.156 23:48:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:49.156 23:48:08 -- common/autotest_common.sh@10 -- # set +x 00:34:49.156 23:48:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:49.156 23:48:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:49.156 23:48:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:49.157 23:48:08 -- common/autotest_common.sh@10 -- # set +x 00:34:49.157 23:48:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:49.157 00:34:49.157 real 0m11.157s 00:34:49.157 user 0m28.958s 00:34:49.157 sys 0m2.359s 00:34:49.157 23:48:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:49.157 23:48:08 -- common/autotest_common.sh@10 -- # set +x 00:34:49.157 ************************************ 00:34:49.157 END TEST fio_dif_digest 00:34:49.157 ************************************ 00:34:49.157 23:48:08 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:49.157 23:48:08 -- target/dif.sh@147 -- # nvmftestfini 00:34:49.157 23:48:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:49.157 23:48:08 -- nvmf/common.sh@116 -- # sync 00:34:49.157 23:48:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:49.157 23:48:08 -- nvmf/common.sh@119 -- # set +e 00:34:49.157 23:48:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:49.157 23:48:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:49.157 rmmod nvme_tcp 00:34:49.157 rmmod nvme_fabrics 00:34:49.157 rmmod nvme_keyring 00:34:49.157 23:48:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:49.157 23:48:08 -- nvmf/common.sh@123 -- # set -e 00:34:49.157 23:48:08 -- nvmf/common.sh@124 -- # return 0 00:34:49.157 23:48:08 -- nvmf/common.sh@477 -- # '[' -n 401196 ']' 00:34:49.157 23:48:08 -- nvmf/common.sh@478 -- # killprocess 401196 00:34:49.157 23:48:08 -- 
common/autotest_common.sh@926 -- # '[' -z 401196 ']' 00:34:49.157 23:48:08 -- common/autotest_common.sh@930 -- # kill -0 401196 00:34:49.157 23:48:08 -- common/autotest_common.sh@931 -- # uname 00:34:49.157 23:48:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:49.157 23:48:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 401196 00:34:49.157 23:48:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:49.157 23:48:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:49.157 23:48:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 401196' 00:34:49.157 killing process with pid 401196 00:34:49.157 23:48:08 -- common/autotest_common.sh@945 -- # kill 401196 00:34:49.157 23:48:08 -- common/autotest_common.sh@950 -- # wait 401196 00:34:49.157 23:48:08 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:49.157 23:48:08 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:49.157 Waiting for block devices as requested 00:34:49.415 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:49.415 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:49.675 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:49.675 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:49.675 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:49.936 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:49.936 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:49.936 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:49.936 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:50.196 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:50.196 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:50.196 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:50.196 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:50.454 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:50.454 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:50.454 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:50.713 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:50.713 23:48:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:50.713 23:48:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:50.713 23:48:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:50.713 23:48:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:50.713 23:48:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.713 23:48:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:50.713 23:48:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.620 23:48:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:52.620 00:34:52.620 real 1m9.614s 00:34:52.620 user 6m36.704s 00:34:52.620 sys 0m18.327s 00:34:52.620 23:48:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:52.620 23:48:13 -- common/autotest_common.sh@10 -- # set +x 00:34:52.620 ************************************ 00:34:52.620 END TEST nvmf_dif 00:34:52.620 ************************************ 00:34:52.880 23:48:13 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:52.880 23:48:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:52.880 23:48:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:52.880 23:48:13 -- common/autotest_common.sh@10 -- # set +x 00:34:52.880 ************************************ 00:34:52.880 START TEST nvmf_abort_qd_sizes 00:34:52.880 
************************************ 00:34:52.880 23:48:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:52.880 * Looking for test storage... 00:34:52.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:52.880 23:48:13 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:52.880 23:48:13 -- nvmf/common.sh@7 -- # uname -s 00:34:52.880 23:48:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.880 23:48:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.880 23:48:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:52.880 23:48:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.880 23:48:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.880 23:48:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:52.880 23:48:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.880 23:48:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.880 23:48:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.880 23:48:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.880 23:48:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:52.880 23:48:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:52.880 23:48:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.880 23:48:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.880 23:48:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.880 23:48:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.880 23:48:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.880 23:48:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.880 23:48:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.880 23:48:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.880 23:48:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.880 23:48:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.880 23:48:13 -- paths/export.sh@5 -- # export PATH 00:34:52.880 23:48:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.880 23:48:13 -- nvmf/common.sh@46 -- # : 0 00:34:52.880 23:48:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:52.880 23:48:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:52.880 23:48:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:52.880 23:48:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.880 23:48:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.880 23:48:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:52.880 23:48:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:52.880 23:48:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:52.880 23:48:13 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:34:52.880 23:48:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:52.880 23:48:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:52.880 23:48:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:52.880 23:48:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:52.880 23:48:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:52.880 23:48:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.880 23:48:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:52.880 23:48:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.880 23:48:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:52.880 23:48:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:52.880 23:48:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:52.880 23:48:13 -- common/autotest_common.sh@10 -- # set +x 00:34:55.417 23:48:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:55.417 23:48:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:55.417 23:48:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:55.417 23:48:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:55.417 23:48:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:55.417 23:48:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:55.417 23:48:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:55.417 23:48:16 -- nvmf/common.sh@294 -- # net_devs=() 00:34:55.417 23:48:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:55.417 23:48:16 -- nvmf/common.sh@295 -- # e810=() 00:34:55.417 23:48:16 -- nvmf/common.sh@295 -- # local -ga e810 00:34:55.417 23:48:16 -- nvmf/common.sh@296 -- # x722=() 00:34:55.417 23:48:16 -- nvmf/common.sh@296 -- # local -ga x722 00:34:55.417 23:48:16 -- nvmf/common.sh@297 -- # mlx=() 00:34:55.417 23:48:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:55.417 23:48:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.417 23:48:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.418 23:48:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:55.418 23:48:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:55.418 23:48:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:55.418 23:48:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:55.418 23:48:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:55.418 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:55.418 23:48:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:55.418 23:48:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:55.418 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:55.418 23:48:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:55.418 23:48:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:55.418 23:48:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.418 23:48:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:55.418 23:48:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.418 23:48:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:55.418 Found net devices under 0000:84:00.0: cvl_0_0 00:34:55.418 23:48:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.418 23:48:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:55.418 23:48:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.418 23:48:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:55.418 23:48:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.418 23:48:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:55.418 Found net devices under 0000:84:00.1: cvl_0_1 00:34:55.418 23:48:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.418 23:48:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:55.418 23:48:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:55.418 23:48:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:55.418 23:48:16 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:55.418 23:48:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:55.418 23:48:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.418 23:48:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.418 23:48:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.418 23:48:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:55.418 23:48:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.418 23:48:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.418 23:48:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:55.418 23:48:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.418 23:48:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.418 23:48:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:55.418 23:48:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:55.418 23:48:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:55.677 23:48:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.677 23:48:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.677 23:48:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.677 23:48:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:55.677 23:48:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.677 23:48:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.677 23:48:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.677 23:48:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:55.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:34:55.677 00:34:55.677 --- 10.0.0.2 ping statistics --- 00:34:55.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.677 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:34:55.677 23:48:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:55.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:34:55.677 00:34:55.677 --- 10.0.0.1 ping statistics --- 00:34:55.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.677 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:34:55.677 23:48:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.677 23:48:16 -- nvmf/common.sh@410 -- # return 0 00:34:55.677 23:48:16 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:55.677 23:48:16 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.580 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:57.580 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:57.580 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:57.580 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:57.580 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:57.580 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:57.580 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:57.580 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:57.580 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:58.147 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:34:58.406 23:48:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:58.406 23:48:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:58.406 23:48:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:58.406 23:48:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:58.406 23:48:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:58.406 23:48:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:58.406 23:48:19 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:34:58.406 23:48:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:58.406 23:48:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:58.406 23:48:19 -- common/autotest_common.sh@10 -- # set +x 00:34:58.406 23:48:19 -- nvmf/common.sh@469 -- # nvmfpid=413327 00:34:58.406 23:48:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:58.406 23:48:19 -- nvmf/common.sh@470 -- # waitforlisten 413327 00:34:58.406 23:48:19 -- common/autotest_common.sh@819 -- # '[' -z 413327 ']' 00:34:58.406 23:48:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.406 23:48:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:58.406 23:48:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.406 23:48:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:58.406 23:48:19 -- common/autotest_common.sh@10 -- # set +x 00:34:58.406 [2024-07-11 23:48:19.265172] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:34:58.406 [2024-07-11 23:48:19.265278] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:58.406 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.406 [2024-07-11 23:48:19.343998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:58.664 [2024-07-11 23:48:19.441045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:58.664 [2024-07-11 23:48:19.441244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:58.665 [2024-07-11 23:48:19.441266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.665 [2024-07-11 23:48:19.441282] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:58.665 [2024-07-11 23:48:19.441343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.665 [2024-07-11 23:48:19.441398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.665 [2024-07-11 23:48:19.441460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:58.665 [2024-07-11 23:48:19.441462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.922 23:48:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:58.922 23:48:19 -- common/autotest_common.sh@852 -- # return 0 00:34:58.922 23:48:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:58.922 23:48:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:58.922 23:48:19 -- common/autotest_common.sh@10 -- # set +x 00:34:58.922 23:48:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:34:58.922 23:48:19 -- scripts/common.sh@311 -- # local bdf bdfs 00:34:58.922 23:48:19 -- scripts/common.sh@312 -- # local nvmes 00:34:58.922 23:48:19 -- scripts/common.sh@314 -- # [[ -n 0000:82:00.0 ]] 00:34:58.922 23:48:19 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:58.922 23:48:19 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:34:58.922 23:48:19 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:34:58.922 23:48:19 -- scripts/common.sh@322 -- # uname -s 00:34:58.922 23:48:19 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:34:58.922 23:48:19 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:34:58.922 23:48:19 -- scripts/common.sh@327 -- # (( 1 )) 00:34:58.922 23:48:19 -- scripts/common.sh@328 -- # printf '%s\n' 0000:82:00.0 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:82:00.0 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:34:58.922 23:48:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:58.922 23:48:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:58.922 23:48:19 -- common/autotest_common.sh@10 -- # set +x 00:34:58.922 ************************************ 00:34:58.922 START TEST 
spdk_target_abort 00:34:58.922 ************************************ 00:34:58.922 23:48:19 -- common/autotest_common.sh@1104 -- # spdk_target 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:58.922 23:48:19 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:34:58.922 23:48:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.922 23:48:19 -- common/autotest_common.sh@10 -- # set +x 00:35:02.261 spdk_targetn1 00:35:02.261 23:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.261 23:48:22 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:02.261 23:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.261 23:48:22 -- common/autotest_common.sh@10 -- # set +x 00:35:02.261 [2024-07-11 23:48:22.510301] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.261 23:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.261 23:48:22 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:02.261 23:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.261 23:48:22 -- common/autotest_common.sh@10 -- # set +x 00:35:02.261 23:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:02.262 23:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.262 23:48:22 -- common/autotest_common.sh@10 -- # set +x 00:35:02.262 23:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:02.262 23:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.262 23:48:22 -- common/autotest_common.sh@10 -- # set +x 00:35:02.262 [2024-07-11 23:48:22.543964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.262 23:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:35:02.262 23:48:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:02.263 23:48:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.263 23:48:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:02.263 23:48:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:02.263 23:48:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:02.263 23:48:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:02.263 23:48:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:02.263 EAL: No free 2048 kB hugepages reported on node 1 00:35:04.787 Initializing NVMe Controllers 00:35:04.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:04.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:04.787 Initialization complete. Launching workers. 00:35:04.787 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8502, failed: 0 00:35:04.787 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1440, failed to submit 7062 00:35:04.787 success 841, unsuccess 599, failed 0 00:35:04.787 23:48:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:04.787 23:48:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:04.787 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.962 Initializing NVMe Controllers 00:35:08.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:08.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:08.962 Initialization complete. Launching workers. 00:35:08.962 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8687, failed: 0 00:35:08.962 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1219, failed to submit 7468 00:35:08.962 success 346, unsuccess 873, failed 0 00:35:08.962 23:48:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.962 23:48:29 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:08.962 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.490 Initializing NVMe Controllers 00:35:11.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:11.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:11.490 Initialization complete. Launching workers. 
00:35:11.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32109, failed: 0 00:35:11.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2674, failed to submit 29435 00:35:11.490 success 551, unsuccess 2123, failed 0 00:35:11.490 23:48:32 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:11.490 23:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.490 23:48:32 -- common/autotest_common.sh@10 -- # set +x 00:35:11.490 23:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.490 23:48:32 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:11.490 23:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.490 23:48:32 -- common/autotest_common.sh@10 -- # set +x 00:35:12.860 23:48:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:12.860 23:48:33 -- target/abort_qd_sizes.sh@62 -- # killprocess 413327 00:35:12.860 23:48:33 -- common/autotest_common.sh@926 -- # '[' -z 413327 ']' 00:35:12.860 23:48:33 -- common/autotest_common.sh@930 -- # kill -0 413327 00:35:12.860 23:48:33 -- common/autotest_common.sh@931 -- # uname 00:35:12.860 23:48:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:12.860 23:48:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 413327 00:35:12.860 23:48:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:12.860 23:48:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:12.860 23:48:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 413327' 00:35:12.860 killing process with pid 413327 00:35:12.860 23:48:33 -- common/autotest_common.sh@945 -- # kill 413327 00:35:12.860 23:48:33 -- common/autotest_common.sh@950 -- # wait 413327 00:35:13.120 00:35:13.120 real 0m14.270s 00:35:13.120 user 0m54.127s 00:35:13.120 sys 0m2.841s 00:35:13.120 23:48:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.120 23:48:33 -- common/autotest_common.sh@10 -- # set +x 00:35:13.120 ************************************ 00:35:13.120 END TEST spdk_target_abort 00:35:13.120 ************************************ 00:35:13.120 23:48:33 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:13.120 23:48:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:13.120 23:48:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:13.120 23:48:33 -- common/autotest_common.sh@10 -- # set +x 00:35:13.120 ************************************ 00:35:13.120 START TEST kernel_target_abort 00:35:13.120 ************************************ 00:35:13.120 23:48:33 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:13.120 23:48:33 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:13.120 23:48:33 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:13.120 23:48:33 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:13.120 23:48:33 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:13.120 23:48:33 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:13.120 23:48:33 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:13.120 23:48:33 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:13.120 23:48:33 -- nvmf/common.sh@627 -- # local block nvme 00:35:13.120 23:48:33 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:13.120 23:48:33 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:13.120 23:48:34 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:13.120 23:48:34 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:14.497 Waiting for block devices as requested 00:35:14.497 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:35:14.757 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:14.757 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:15.017 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:15.017 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:15.017 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:15.017 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:15.276 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:15.276 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:15.276 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:15.276 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:15.535 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:15.535 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:15.535 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:15.793 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:15.793 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:15.793 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:16.051 23:48:36 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:16.051 23:48:36 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:16.051 23:48:36 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:16.051 23:48:36 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:16.051 23:48:36 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:16.051 No valid GPT data, bailing 00:35:16.051 23:48:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:16.051 23:48:36 -- scripts/common.sh@393 -- # pt= 00:35:16.051 23:48:36 -- scripts/common.sh@394 -- # return 1 00:35:16.051 23:48:36 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:16.051 23:48:36 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:16.051 23:48:36 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:16.051 23:48:36 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:16.051 23:48:36 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:16.051 23:48:36 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:16.051 23:48:36 -- nvmf/common.sh@654 -- # echo 1 00:35:16.051 23:48:36 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:16.051 23:48:36 -- nvmf/common.sh@656 -- # echo 1 00:35:16.051 23:48:36 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:16.051 23:48:36 -- nvmf/common.sh@663 -- # echo tcp 00:35:16.051 23:48:36 -- nvmf/common.sh@664 -- # echo 4420 00:35:16.051 23:48:36 -- nvmf/common.sh@665 -- # echo ipv4 00:35:16.051 23:48:36 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:16.051 23:48:36 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:35:16.051 00:35:16.051 Discovery Log Number of Records 2, Generation counter 2 00:35:16.051 =====Discovery Log Entry 0====== 00:35:16.051 trtype: tcp 00:35:16.051 adrfam: ipv4 00:35:16.051 
subtype: current discovery subsystem 00:35:16.051 treq: not specified, sq flow control disable supported 00:35:16.051 portid: 1 00:35:16.051 trsvcid: 4420 00:35:16.051 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:16.051 traddr: 10.0.0.1 00:35:16.051 eflags: none 00:35:16.051 sectype: none 00:35:16.051 =====Discovery Log Entry 1====== 00:35:16.051 trtype: tcp 00:35:16.051 adrfam: ipv4 00:35:16.051 subtype: nvme subsystem 00:35:16.051 treq: not specified, sq flow control disable supported 00:35:16.051 portid: 1 00:35:16.051 trsvcid: 4420 00:35:16.051 subnqn: kernel_target 00:35:16.051 traddr: 10.0.0.1 00:35:16.051 eflags: none 00:35:16.051 sectype: none 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:16.051 23:48:36 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:16.309 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.587 Initializing NVMe Controllers 00:35:19.587 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:19.587 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:19.587 Initialization complete. Launching workers. 
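The configure_kernel_target steps traced above build a Linux kernel NVMe-oF TCP target through configfs, but xtrace does not show redirection targets, so the attribute paths in this sketch are inferred from the standard nvmet configfs layout rather than read from the log (attr_serial in particular is an assumption):

    modprobe nvmet                        # kernel NVMe-oF target; teardown later removes nvmet_tcp as well
    cd /sys/kernel/config/nvmet
    mkdir subsystems/kernel_target
    mkdir subsystems/kernel_target/namespaces/1
    mkdir ports/1
    echo SPDK-kernel_target > subsystems/kernel_target/attr_serial         # serial string (attribute name inferred)
    echo 1 > subsystems/kernel_target/attr_allow_any_host                   # no host allow-list
    echo /dev/nvme0n1 > subsystems/kernel_target/namespaces/1/device_path   # back NSID 1 with the local disk
    echo 1 > subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr                                     # listen on 10.0.0.1:4420, TCP/IPv4
    echo tcp > ports/1/addr_trtype
    echo 4420 > ports/1/addr_trsvcid
    echo ipv4 > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/  # expose the subsystem on the port

The nvme discover output above confirms the result: a discovery subsystem plus the kernel_target subsystem, both reachable at 10.0.0.1:4420. The clean_kernel_target teardown later in the log reverses these steps (echo 0 to enable, rm the port symlink, rmdir namespace, port and subsystem, then modprobe -r nvmet_tcp nvmet).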
00:35:19.587 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30652, failed: 0 00:35:19.587 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30652, failed to submit 0 00:35:19.587 success 0, unsuccess 30652, failed 0 00:35:19.587 23:48:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:19.587 23:48:40 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:19.587 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.866 Initializing NVMe Controllers 00:35:22.866 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:22.866 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:22.866 Initialization complete. Launching workers. 00:35:22.866 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 61503, failed: 0 00:35:22.866 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15522, failed to submit 45981 00:35:22.866 success 0, unsuccess 15522, failed 0 00:35:22.866 23:48:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:22.866 23:48:43 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:22.866 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.437 Initializing NVMe Controllers 00:35:25.437 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:25.437 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:25.437 Initialization complete. Launching workers. 
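The rabort helper expanded above sweeps the same abort workload across queue depths 4, 24 and 64. Condensed to its core, with the workspace path shortened and the flags copied straight from the trace (-q queue depth, -w rw with -M 50 for a mixed read/write pattern, -o 4096-byte I/Os, -r the transport ID string):

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
    for qd in 4 24 64; do
        ./spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

The per-run summaries show the queue-depth sensitivity this test exists to exercise: at -q 4 an abort was submitted for all 30652 completed I/Os, while at -q 24 only 15522 of 61503 could be submitted and the remainder failed to submit.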
00:35:25.437 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 60209, failed: 0 00:35:25.437 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15030, failed to submit 45179 00:35:25.437 success 0, unsuccess 15030, failed 0 00:35:25.437 23:48:46 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:35:25.437 23:48:46 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:35:25.437 23:48:46 -- nvmf/common.sh@677 -- # echo 0 00:35:25.437 23:48:46 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:35:25.437 23:48:46 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:25.437 23:48:46 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:25.437 23:48:46 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:25.697 23:48:46 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:35:25.697 23:48:46 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:35:25.697 00:35:25.697 real 0m12.446s 00:35:25.697 user 0m4.432s 00:35:25.697 sys 0m2.913s 00:35:25.697 23:48:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:25.697 23:48:46 -- common/autotest_common.sh@10 -- # set +x 00:35:25.697 ************************************ 00:35:25.697 END TEST kernel_target_abort 00:35:25.697 ************************************ 00:35:25.697 23:48:46 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:35:25.697 23:48:46 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:35:25.697 23:48:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:25.697 23:48:46 -- nvmf/common.sh@116 -- # sync 00:35:25.697 23:48:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:25.697 23:48:46 -- nvmf/common.sh@119 -- # set +e 00:35:25.697 23:48:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:25.697 23:48:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:25.697 rmmod nvme_tcp 00:35:25.697 rmmod nvme_fabrics 00:35:25.697 rmmod nvme_keyring 00:35:25.697 23:48:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:25.697 23:48:46 -- nvmf/common.sh@123 -- # set -e 00:35:25.697 23:48:46 -- nvmf/common.sh@124 -- # return 0 00:35:25.697 23:48:46 -- nvmf/common.sh@477 -- # '[' -n 413327 ']' 00:35:25.697 23:48:46 -- nvmf/common.sh@478 -- # killprocess 413327 00:35:25.697 23:48:46 -- common/autotest_common.sh@926 -- # '[' -z 413327 ']' 00:35:25.697 23:48:46 -- common/autotest_common.sh@930 -- # kill -0 413327 00:35:25.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (413327) - No such process 00:35:25.697 23:48:46 -- common/autotest_common.sh@953 -- # echo 'Process with pid 413327 is not found' 00:35:25.697 Process with pid 413327 is not found 00:35:25.697 23:48:46 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:25.697 23:48:46 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:27.598 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:35:27.598 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:35:27.598 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:27.598 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:27.598 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:27.598 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:27.598 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 
00:35:27.598 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:27.598 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:27.598 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:27.598 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:27.598 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:27.598 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:27.598 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:27.598 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:27.598 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:27.598 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:35:27.598 23:48:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:27.598 23:48:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:27.598 23:48:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:27.598 23:48:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:27.598 23:48:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.598 23:48:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:27.598 23:48:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.132 23:48:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:30.132 00:35:30.132 real 0m36.952s 00:35:30.132 user 1m1.280s 00:35:30.132 sys 0m10.479s 00:35:30.132 23:48:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.132 23:48:50 -- common/autotest_common.sh@10 -- # set +x 00:35:30.132 ************************************ 00:35:30.132 END TEST nvmf_abort_qd_sizes 00:35:30.132 ************************************ 00:35:30.132 23:48:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:30.132 23:48:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:30.132 23:48:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:30.132 23:48:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:30.132 23:48:50 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:35:30.132 23:48:50 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:35:30.132 23:48:50 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:35:30.132 23:48:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:30.132 23:48:50 -- common/autotest_common.sh@10 -- # set +x 00:35:30.132 23:48:50 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:35:30.132 23:48:50 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:35:30.132 23:48:50 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:35:30.132 23:48:50 -- common/autotest_common.sh@10 -- # set +x 00:35:32.038 INFO: APP EXITING 00:35:32.038 INFO: killing all VMs 00:35:32.038 INFO: killing vhost app 00:35:32.038 INFO: EXIT DONE 00:35:33.942 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:35:33.942 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:35:33.942 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:33.942 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:33.942 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:33.942 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:33.942 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:35:33.942 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:33.942 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:33.942 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:33.942 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:33.942 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:33.942 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:33.942 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:33.942 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:33.942 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:33.942 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:35:35.320 Cleaning 00:35:35.320 Removing: /var/run/dpdk/spdk0/config 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:35.320 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:35.320 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:35.320 Removing: /var/run/dpdk/spdk1/config 00:35:35.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:35.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:35.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:35.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:35.579 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:35.579 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:35.579 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:35.579 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:35.579 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:35.579 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:35.579 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:35.579 Removing: /var/run/dpdk/spdk2/config 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:35.579 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:35.579 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:35.579 Removing: /var/run/dpdk/spdk3/config 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:35.579 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:35.579 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:35.580 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:35.580 Removing: /var/run/dpdk/spdk4/config 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:35.580 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:35.580 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:35.580 Removing: /dev/shm/bdev_svc_trace.1 00:35:35.580 Removing: /dev/shm/nvmf_trace.0 00:35:35.580 Removing: /dev/shm/spdk_tgt_trace.pid128799 00:35:35.580 Removing: /var/run/dpdk/spdk0 00:35:35.580 Removing: /var/run/dpdk/spdk1 00:35:35.580 Removing: /var/run/dpdk/spdk2 00:35:35.580 Removing: /var/run/dpdk/spdk3 00:35:35.580 Removing: /var/run/dpdk/spdk4 00:35:35.580 Removing: /var/run/dpdk/spdk_pid127105 00:35:35.580 Removing: /var/run/dpdk/spdk_pid127856 00:35:35.580 Removing: /var/run/dpdk/spdk_pid128799 00:35:35.580 Removing: /var/run/dpdk/spdk_pid129303 00:35:35.580 Removing: /var/run/dpdk/spdk_pid130766 00:35:35.580 Removing: /var/run/dpdk/spdk_pid131704 00:35:35.580 Removing: /var/run/dpdk/spdk_pid131891 00:35:35.580 Removing: /var/run/dpdk/spdk_pid132341 00:35:35.580 Removing: /var/run/dpdk/spdk_pid132681 00:35:35.580 Removing: /var/run/dpdk/spdk_pid132878 00:35:35.580 Removing: /var/run/dpdk/spdk_pid133037 00:35:35.580 Removing: /var/run/dpdk/spdk_pid133298 00:35:35.580 Removing: /var/run/dpdk/spdk_pid133505 00:35:35.580 Removing: /var/run/dpdk/spdk_pid133968 00:35:35.580 Removing: /var/run/dpdk/spdk_pid137028 00:35:35.580 Removing: /var/run/dpdk/spdk_pid137441 00:35:35.580 Removing: /var/run/dpdk/spdk_pid137962 00:35:35.580 Removing: /var/run/dpdk/spdk_pid138271 00:35:35.580 Removing: /var/run/dpdk/spdk_pid138641 00:35:35.580 Removing: /var/run/dpdk/spdk_pid138735 00:35:35.580 Removing: /var/run/dpdk/spdk_pid139171 00:35:35.580 Removing: /var/run/dpdk/spdk_pid139311 00:35:35.580 Removing: /var/run/dpdk/spdk_pid139613 00:35:35.580 Removing: /var/run/dpdk/spdk_pid139758 00:35:35.580 Removing: /var/run/dpdk/spdk_pid139928 00:35:35.580 Removing: /var/run/dpdk/spdk_pid140159 00:35:35.580 Removing: /var/run/dpdk/spdk_pid140561 00:35:35.580 Removing: /var/run/dpdk/spdk_pid140717 00:35:35.580 Removing: /var/run/dpdk/spdk_pid140913 00:35:35.580 Removing: /var/run/dpdk/spdk_pid141211 00:35:35.580 Removing: /var/run/dpdk/spdk_pid141241 00:35:35.580 Removing: /var/run/dpdk/spdk_pid141418 00:35:35.580 Removing: /var/run/dpdk/spdk_pid141558 00:35:35.580 Removing: /var/run/dpdk/spdk_pid141722 00:35:35.580 Removing: /var/run/dpdk/spdk_pid141984 00:35:35.580 Removing: /var/run/dpdk/spdk_pid142146 00:35:35.580 Removing: /var/run/dpdk/spdk_pid142288 00:35:35.580 Removing: /var/run/dpdk/spdk_pid142482 
00:35:35.580 Removing: /var/run/dpdk/spdk_pid142706 00:35:35.580 Removing: /var/run/dpdk/spdk_pid142874 00:35:35.838 Removing: /var/run/dpdk/spdk_pid143015 00:35:35.838 Removing: /var/run/dpdk/spdk_pid143256 00:35:35.838 Removing: /var/run/dpdk/spdk_pid143433 00:35:35.838 Removing: /var/run/dpdk/spdk_pid143594 00:35:35.838 Removing: /var/run/dpdk/spdk_pid143742 00:35:35.838 Removing: /var/run/dpdk/spdk_pid144014 00:35:35.838 Removing: /var/run/dpdk/spdk_pid144160 00:35:35.838 Removing: /var/run/dpdk/spdk_pid144315 00:35:35.838 Removing: /var/run/dpdk/spdk_pid144476 00:35:35.838 Removing: /var/run/dpdk/spdk_pid144741 00:35:35.838 Removing: /var/run/dpdk/spdk_pid144882 00:35:35.838 Removing: /var/run/dpdk/spdk_pid145042 00:35:35.838 Removing: /var/run/dpdk/spdk_pid145257 00:35:35.838 Removing: /var/run/dpdk/spdk_pid145466 00:35:35.838 Removing: /var/run/dpdk/spdk_pid145604 00:35:35.838 Removing: /var/run/dpdk/spdk_pid145767 00:35:35.838 Removing: /var/run/dpdk/spdk_pid146028 00:35:35.838 Removing: /var/run/dpdk/spdk_pid146189 00:35:35.838 Removing: /var/run/dpdk/spdk_pid146331 00:35:35.838 Removing: /var/run/dpdk/spdk_pid146583 00:35:35.838 Removing: /var/run/dpdk/spdk_pid146753 00:35:35.838 Removing: /var/run/dpdk/spdk_pid146910 00:35:35.838 Removing: /var/run/dpdk/spdk_pid147054 00:35:35.838 Removing: /var/run/dpdk/spdk_pid147329 00:35:35.838 Removing: /var/run/dpdk/spdk_pid147480 00:35:35.838 Removing: /var/run/dpdk/spdk_pid147639 00:35:35.838 Removing: /var/run/dpdk/spdk_pid147789 00:35:35.838 Removing: /var/run/dpdk/spdk_pid148066 00:35:35.838 Removing: /var/run/dpdk/spdk_pid148212 00:35:35.838 Removing: /var/run/dpdk/spdk_pid148370 00:35:35.838 Removing: /var/run/dpdk/spdk_pid148627 00:35:35.838 Removing: /var/run/dpdk/spdk_pid148789 00:35:35.838 Removing: /var/run/dpdk/spdk_pid148863 00:35:35.838 Removing: /var/run/dpdk/spdk_pid149141 00:35:35.838 Removing: /var/run/dpdk/spdk_pid151401 00:35:35.838 Removing: /var/run/dpdk/spdk_pid207438 00:35:35.838 Removing: /var/run/dpdk/spdk_pid210374 00:35:35.838 Removing: /var/run/dpdk/spdk_pid217635 00:35:35.838 Removing: /var/run/dpdk/spdk_pid221042 00:35:35.838 Removing: /var/run/dpdk/spdk_pid223755 00:35:35.838 Removing: /var/run/dpdk/spdk_pid224219 00:35:35.838 Removing: /var/run/dpdk/spdk_pid228700 00:35:35.838 Removing: /var/run/dpdk/spdk_pid228704 00:35:35.838 Removing: /var/run/dpdk/spdk_pid229374 00:35:35.838 Removing: /var/run/dpdk/spdk_pid229993 00:35:35.838 Removing: /var/run/dpdk/spdk_pid230604 00:35:35.838 Removing: /var/run/dpdk/spdk_pid231007 00:35:35.838 Removing: /var/run/dpdk/spdk_pid231019 00:35:35.838 Removing: /var/run/dpdk/spdk_pid231274 00:35:35.838 Removing: /var/run/dpdk/spdk_pid231426 00:35:35.838 Removing: /var/run/dpdk/spdk_pid231428 00:35:35.838 Removing: /var/run/dpdk/spdk_pid232095 00:35:35.838 Removing: /var/run/dpdk/spdk_pid232654 00:35:35.838 Removing: /var/run/dpdk/spdk_pid233321 00:35:35.838 Removing: /var/run/dpdk/spdk_pid233732 00:35:35.838 Removing: /var/run/dpdk/spdk_pid233740 00:35:35.838 Removing: /var/run/dpdk/spdk_pid234002 00:35:35.838 Removing: /var/run/dpdk/spdk_pid235187 00:35:35.838 Removing: /var/run/dpdk/spdk_pid235949 00:35:35.838 Removing: /var/run/dpdk/spdk_pid241519 00:35:35.838 Removing: /var/run/dpdk/spdk_pid241806 00:35:35.838 Removing: /var/run/dpdk/spdk_pid244637 00:35:35.838 Removing: /var/run/dpdk/spdk_pid248680 00:35:35.838 Removing: /var/run/dpdk/spdk_pid250911 00:35:35.838 Removing: /var/run/dpdk/spdk_pid258406 00:35:35.839 Removing: /var/run/dpdk/spdk_pid264009 00:35:35.839 
Removing: /var/run/dpdk/spdk_pid265236 00:35:35.839 Removing: /var/run/dpdk/spdk_pid265913 00:35:35.839 Removing: /var/run/dpdk/spdk_pid276971 00:35:35.839 Removing: /var/run/dpdk/spdk_pid279365 00:35:35.839 Removing: /var/run/dpdk/spdk_pid282485 00:35:35.839 Removing: /var/run/dpdk/spdk_pid283690 00:35:35.839 Removing: /var/run/dpdk/spdk_pid285048 00:35:35.839 Removing: /var/run/dpdk/spdk_pid285192 00:35:35.839 Removing: /var/run/dpdk/spdk_pid285470 00:35:35.839 Removing: /var/run/dpdk/spdk_pid285624 00:35:35.839 Removing: /var/run/dpdk/spdk_pid286218 00:35:35.839 Removing: /var/run/dpdk/spdk_pid287767 00:35:35.839 Removing: /var/run/dpdk/spdk_pid289357 00:35:35.839 Removing: /var/run/dpdk/spdk_pid289799 00:35:35.839 Removing: /var/run/dpdk/spdk_pid293429 00:35:35.839 Removing: /var/run/dpdk/spdk_pid297017 00:35:35.839 Removing: /var/run/dpdk/spdk_pid300672 00:35:35.839 Removing: /var/run/dpdk/spdk_pid325132 00:35:35.839 Removing: /var/run/dpdk/spdk_pid327972 00:35:35.839 Removing: /var/run/dpdk/spdk_pid332068 00:35:35.839 Removing: /var/run/dpdk/spdk_pid333045 00:35:35.839 Removing: /var/run/dpdk/spdk_pid334160 00:35:36.098 Removing: /var/run/dpdk/spdk_pid337001 00:35:36.098 Removing: /var/run/dpdk/spdk_pid339413 00:35:36.098 Removing: /var/run/dpdk/spdk_pid343955 00:35:36.098 Removing: /var/run/dpdk/spdk_pid344081 00:35:36.098 Removing: /var/run/dpdk/spdk_pid347174 00:35:36.098 Removing: /var/run/dpdk/spdk_pid347318 00:35:36.098 Removing: /var/run/dpdk/spdk_pid347452 00:35:36.098 Removing: /var/run/dpdk/spdk_pid347732 00:35:36.098 Removing: /var/run/dpdk/spdk_pid347809 00:35:36.098 Removing: /var/run/dpdk/spdk_pid349051 00:35:36.098 Removing: /var/run/dpdk/spdk_pid350422 00:35:36.098 Removing: /var/run/dpdk/spdk_pid352058 00:35:36.098 Removing: /var/run/dpdk/spdk_pid353236 00:35:36.098 Removing: /var/run/dpdk/spdk_pid354458 00:35:36.098 Removing: /var/run/dpdk/spdk_pid355727 00:35:36.098 Removing: /var/run/dpdk/spdk_pid359836 00:35:36.098 Removing: /var/run/dpdk/spdk_pid360289 00:35:36.098 Removing: /var/run/dpdk/spdk_pid361748 00:35:36.098 Removing: /var/run/dpdk/spdk_pid362635 00:35:36.098 Removing: /var/run/dpdk/spdk_pid366696 00:35:36.098 Removing: /var/run/dpdk/spdk_pid368709 00:35:36.098 Removing: /var/run/dpdk/spdk_pid372493 00:35:36.098 Removing: /var/run/dpdk/spdk_pid376270 00:35:36.098 Removing: /var/run/dpdk/spdk_pid380249 00:35:36.098 Removing: /var/run/dpdk/spdk_pid380990 00:35:36.098 Removing: /var/run/dpdk/spdk_pid381540 00:35:36.098 Removing: /var/run/dpdk/spdk_pid382083 00:35:36.098 Removing: /var/run/dpdk/spdk_pid382818 00:35:36.098 Removing: /var/run/dpdk/spdk_pid383339 00:35:36.098 Removing: /var/run/dpdk/spdk_pid383840 00:35:36.098 Removing: /var/run/dpdk/spdk_pid384418 00:35:36.098 Removing: /var/run/dpdk/spdk_pid387063 00:35:36.098 Removing: /var/run/dpdk/spdk_pid387291 00:35:36.098 Removing: /var/run/dpdk/spdk_pid391168 00:35:36.098 Removing: /var/run/dpdk/spdk_pid391350 00:35:36.098 Removing: /var/run/dpdk/spdk_pid393006 00:35:36.098 Removing: /var/run/dpdk/spdk_pid398275 00:35:36.098 Removing: /var/run/dpdk/spdk_pid398282 00:35:36.098 Removing: /var/run/dpdk/spdk_pid401386 00:35:36.098 Removing: /var/run/dpdk/spdk_pid402934 00:35:36.098 Removing: /var/run/dpdk/spdk_pid404366 00:35:36.098 Removing: /var/run/dpdk/spdk_pid405129 00:35:36.098 Removing: /var/run/dpdk/spdk_pid406583 00:35:36.098 Removing: /var/run/dpdk/spdk_pid407488 00:35:36.098 Removing: /var/run/dpdk/spdk_pid413656 00:35:36.098 Removing: /var/run/dpdk/spdk_pid414036 00:35:36.098 Removing: 
/var/run/dpdk/spdk_pid414439 00:35:36.098 Removing: /var/run/dpdk/spdk_pid416042 00:35:36.098 Removing: /var/run/dpdk/spdk_pid416403 00:35:36.098 Removing: /var/run/dpdk/spdk_pid416731 00:35:36.098 Clean 00:35:36.098 killing process with pid 92298 00:35:48.306 killing process with pid 92295 00:35:48.306 killing process with pid 92297 00:35:48.306 killing process with pid 92296 00:35:48.306 23:49:08 -- common/autotest_common.sh@1436 -- # return 0 00:35:48.306 23:49:08 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:35:48.306 23:49:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:48.306 23:49:08 -- common/autotest_common.sh@10 -- # set +x 00:35:48.306 23:49:08 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:35:48.306 23:49:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:48.306 23:49:08 -- common/autotest_common.sh@10 -- # set +x 00:35:48.306 23:49:08 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:48.306 23:49:08 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:48.306 23:49:08 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:48.306 23:49:08 -- spdk/autotest.sh@394 -- # hash lcov 00:35:48.306 23:49:08 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:48.306 23:49:08 -- spdk/autotest.sh@396 -- # hostname 00:35:48.306 23:49:08 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:48.306 geninfo: WARNING: invalid characters removed from testname! 
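The lcov calls that follow post-process the coverage data: they merge the pre-test baseline capture with the capture taken after the run, then strip out everything that is not SPDK's own code. Condensed, with the repeated --rc branch/function-coverage options and long workspace paths omitted:

    lcov -q -c -d ./spdk -t "$(hostname)" -o cov_test.info           # capture counters after the run
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info      # merge baseline and test captures
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info           # drop bundled DPDK sources
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info             # drop system headers
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info   # drop example/tool code
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info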
00:37:09.837 23:50:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:09.837 23:50:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:09.837 23:50:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:14.069 23:50:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:18.260 23:50:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.452 23:50:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:26.642 23:50:47 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:26.642 23:50:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.642 23:50:47 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:26.642 23:50:47 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.642 23:50:47 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.642 23:50:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.642 23:50:47 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.642 23:50:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.642 23:50:47 -- paths/export.sh@5 -- $ export PATH 00:37:26.642 23:50:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.642 23:50:47 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:26.642 23:50:47 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:26.642 23:50:47 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720734647.XXXXXX 00:37:26.642 23:50:47 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720734647.PkrYrs 00:37:26.642 23:50:47 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:26.642 23:50:47 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:37:26.642 23:50:47 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:26.642 23:50:47 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:26.642 23:50:47 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:26.642 23:50:47 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:26.642 23:50:47 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:26.642 23:50:47 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:26.642 23:50:47 -- common/autotest_common.sh@10 -- $ set +x 00:37:26.642 23:50:47 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:26.642 23:50:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:26.642 23:50:47 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:26.642 23:50:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:26.642 23:50:47 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:26.642 23:50:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:26.642 23:50:47 -- 
spdk/autopackage.sh@19 -- $ timing_finish 00:37:26.642 23:50:47 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:26.642 23:50:47 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:26.642 23:50:47 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:26.642 23:50:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:26.642 + [[ -n 37127 ]] 00:37:26.642 + sudo kill 37127 00:37:26.651 [Pipeline] } 00:37:26.669 [Pipeline] // stage 00:37:26.675 [Pipeline] } 00:37:26.695 [Pipeline] // timeout 00:37:26.700 [Pipeline] } 00:37:26.717 [Pipeline] // catchError 00:37:26.722 [Pipeline] } 00:37:26.739 [Pipeline] // wrap 00:37:26.745 [Pipeline] } 00:37:26.761 [Pipeline] // catchError 00:37:26.770 [Pipeline] stage 00:37:26.772 [Pipeline] { (Epilogue) 00:37:26.786 [Pipeline] catchError 00:37:26.787 [Pipeline] { 00:37:26.802 [Pipeline] echo 00:37:26.804 Cleanup processes 00:37:26.809 [Pipeline] sh 00:37:27.093 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:27.093 429797 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:27.107 [Pipeline] sh 00:37:27.389 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:27.389 ++ grep -v 'sudo pgrep' 00:37:27.389 ++ awk '{print $1}' 00:37:27.389 + sudo kill -9 00:37:27.389 + true 00:37:27.400 [Pipeline] sh 00:37:27.682 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:49.672 [Pipeline] sh 00:37:49.952 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:50.211 Artifacts sizes are good 00:37:50.224 [Pipeline] archiveArtifacts 00:37:50.229 Archiving artifacts 00:37:50.431 [Pipeline] sh 00:37:50.717 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:50.988 [Pipeline] cleanWs 00:37:50.996 [WS-CLEANUP] Deleting project workspace... 00:37:50.997 [WS-CLEANUP] Deferred wipeout is used... 00:37:51.002 [WS-CLEANUP] done 00:37:51.007 [Pipeline] } 00:37:51.030 [Pipeline] // catchError 00:37:51.041 [Pipeline] sh 00:37:51.322 + logger -p user.info -t JENKINS-CI 00:37:51.331 [Pipeline] } 00:37:51.344 [Pipeline] // stage 00:37:51.348 [Pipeline] } 00:37:51.362 [Pipeline] // node 00:37:51.367 [Pipeline] End of Pipeline 00:37:51.400 Finished: SUCCESS
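For reference, the timing_finish step traced in the autopackage section above feeds the per-step timings accumulated in timing.txt to FlameGraph to produce a build-timing visualization. Roughly, with the output redirection assumed since xtrace does not show it:

    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds timing.txt > timing.svg   # flamegraph.pl emits SVG on stdout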